[HN Gopher] Apollo 11 vs. USB-C Chargers (2020)
       ___________________________________________________________________
        
       Apollo 11 vs. USB-C Chargers (2020)
        
       Author : raimue
       Score  : 467 points
       Date   : 2023-12-27 02:21 UTC (20 hours ago)
        
 (HTM) web link (forrestheller.com)
 (TXT) w3m dump (forrestheller.com)
        
       | kristopolous wrote:
        | When we go back to the moon, I wouldn't be surprised if
        | Zilog Z80s were a major part of the hardware. Well-known,
        | well-understood, predictable hardware goes a long way. There
        | are a bunch of other considerations in outer space, and Z80s
        | have proven robust and reliable there. I'd expect a bunch of
        | Kermit and XMODEM to be used as well.
        
         | monocasa wrote:
          | I'm not sure there's a rad-hard Z80 variant.
         | 
         | They've got their own chips and protocols going back just as
         | far, like https://en.wikipedia.org/wiki/MIL-STD-1553
        
           | kristopolous wrote:
            | The Space Shuttle used both the Z80 and the 8086 until
            | the program ended in 2011. The International Space
            | Station runs on, among other chips, 80386SX-20s. IBM/BAE
            | also makes a few rad-hard chips based on POWER.
        
             | monocasa wrote:
             | Do you have a citation for that?
             | 
             | The Space Shuttle Avionics System top level documentation
             | specifically calls out having "no Z80's, 8086s, 68000's,
             | etc."
             | 
             | https://ntrs.nasa.gov/api/citations/19900015844/downloads/1
             | 9...
        
               | kristopolous wrote:
               | Intel claims they did. https://twitter.com/intel/status/4
               | 97927245672218624?lang=en although what's that word
               | "some" doing in there...
               | 
               | And also, sigh, to demonstrate once again that when I
               | worked in space it was 20 years ago,
               | https://www.nytimes.com/2002/05/12/us/for-parts-nasa-
               | boldly-... (https://web.archive.org/web/20230607141742/ht
               | tps://www.nytim...)
               | 
               | Knowing how 8086 timing and interrupts worked was still
               | important for what I was doing in the early 2000s. I
               | don't pretend to remember any of it these days.
        
         | dogma1138 wrote:
          | I doubt it, and if they are used they'll be abstracted to
          | hell behind modern commodity hardware. Apollo had no bias
          | when it came to HMIs/MMIs, so astronauts could be trained
          | on whatever computer interface was possible at the time.
         | 
          | The reason the controls of Dragon and Orion look the way
          | they do is that they are not far off from the modern
          | digital cockpits of jets like the F-22 and F-35, and
          | everyone is used to graphical interfaces and touch
          | controls.
         | 
          | Having non-intuitive interfaces that go against the biases
          | astronauts (and later civilian contractors) have already
          | formed by using such interfaces over the past two decades
          | would be detrimental to overall mission success.
         | 
          | The other reason they'll opt for commodity hardware is
          | that if we are going back to space for real now, you need
          | to be able to build and deploy systems at an ever-
          | increasing pace.
         | 
          | We have enough powerful human-safety-rated hardware from
          | aerospace and automotive; there is no need to dig up
          | relics.
         | 
          | And lastly, you'll be hard-pressed to find people who
          | still know how to work with such legacy hardware at scale,
          | and unless we drastically change the computer science
          | curriculum around the US and the world, that pool will
          | only get smaller each year. We're far more likely to see
          | ARM and RISC-V in space than Z80s.
        
         | GlenTheMachine wrote:
          | They won't be. We will use RAD750s, the flight-qualified
          | variant of the PowerPC architecture. That's the standard
          | high-end flight processor.
         | 
         | https://www.petervis.com/Vintage%20Chips/PowerPC%20750/RAD75...
         | 
         | The next generation (at least according to NASA) will be RISC-V
         | variants:
         | 
         | https://www.zdnet.com/article/nasa-has-chosen-these-cpus-to-...
        
           | kristopolous wrote:
            | The 750 is still based on a 27-year-old chip and runs at
            | half its clock speed. The point was that spaceflight is
            | relatively computationally modest.
        
             | demondemidi wrote:
              | Reliability is more important. Even more problematic
              | is that many semiconductor companies have been
              | funneled into just a few due to decades of mergers,
              | and all of them are chasing profits, which means
              | jettisoning rad-hard mil-spec devices. Up until the
              | early 2000s Intel was still making hardened versions
              | of the 386; now they make no mil-spec parts.
        
           | johnwalkr wrote:
            | I wouldn't call it the standard; it's just used in
            | designs with legacy, to avoid the huge cost of re-
            | qualifying hardware and software. It's often infeasible
            | due to cost and power consumption. I work in the private
            | sector in space (lunar exploration, actually), and
            | everyone is qualifying normal/automotive-grade stuff, or
            | using space-grade microcontrollers for in-house designs,
            | with everything from 8 to 32 bits [1], and ready-made
            | CPU boards [2] for more complex use cases. I'm sharing
            | just two examples, but there are hundreds, with
            | variations on redundancy implemented in all kinds of
            | ways too, such as in software, on multiple cores, on
            | multiple chips, or on multiple soft-CPU cores on one or
            | more FPGAs.
           | 
           | [1] Example: https://www.militaryaerospace.com/computers/arti
           | cle/16726923...
           | 
           | [2] Example: https://xiphos.com/product-details/q8
        
         | johnwalkr wrote:
          | I work in this space, and the Z80, Kermit, and XMODEM are
          | not part of the solution. Just because this stuff is
          | simple to the user doesn't mean it's the best suited, and
          | a whole industry has been working on this since the Z80
          | days. You can buy space-qualified microcontroller
          | boards/components with anything from a simple 8-bit
          | microcontroller to a 64-bit, multicore, 1 GHz+ ARM CPU,
          | depending on the use case. I'm sure the Z80 has been used
          | in space, but in my time in the industry I've never heard
          | of it.
         | 
          | Kermit and XMODEM probably aren't what you want to use;
          | they actually sit at a higher level than what is normally
          | used and would add a lot of overhead, if they even worked
          | at all with latencies that can reach 5-10 s. Search for
          | the keyword "CCSDS" for hints about the data protocols
          | used in space.
        
           | kristopolous wrote:
            | I worked in it 20 years ago, building diagnostic and
            | networking tools... ARM was certainly around, but so was
            | what I talked about. Things have probably changed since
            | then.
           | 
           | Here's kermit in space ... coincidentally in a 20 year old
           | article. Software I wrote supported diagnosing kermit errors.
           | 
           | https://www.spacedaily.com/news/iss-03zq.html
           | 
           | I guess now I'm old.
        
             | johnwalkr wrote:
              | Thanks for the reference! Kermit could be used locally
              | on the ISS, or in a lunar mission now that I think
              | about it, but so is/could be SSH, web browsers, or any
              | modern technology. But most space exploration is
              | robotic and depends on communication with ground
              | stations on Earth, and that is fairly standardized.
              | Perhaps Kermit will be used on the lunar surface, and
              | that would be a simplification compared to a web
              | browser interface. But for communication between
              | Earth and Moon, there are standards in place, and it
              | would be a complication, not a simplification, to add
              | such a protocol.
        
               | kristopolous wrote:
                | Oh, who knows... I stopped working on that stuff in,
                | I think, 2006. The big push then was over to CAN and
                | something called AFDX, which worked over Ethernet. I
                | was dealing with octal and BCD daily in the mid
                | 2000s.
                | 
                | I have no idea if people still use ARINC 429 or
                | IRIG-B. Embedded RTOSes were all proprietary back
                | then, for instance VxWorks. I'm sure that's no
                | longer the case. I hated VxWorks.
        
               | lambda wrote:
                | Yeah, I'm working on a fly-by-wire eVTOL project. We
                | are using the CAN bus as our primary bus, but a
                | number of off-the-shelf components we use, like the
                | ADAHRS, talk ARINC 429, so our FCCs will have a
                | number of ARINC 429 interfaces.
                | 
                | But at least for the components we're developing, we
                | have basically standardized on ARM, the TMS570
                | specifically, since it offers a number of features
                | for safety-critical systems, and it simplifies our
                | tooling and safety analysis to use the same
                | processor everywhere.
               | 
                | The Z80 is pretty retro, and while I'm sure some
                | vendors still use it, it's got to be getting pretty
                | rare in new designs. Between all the PowerPC, ARM,
                | and now RISC-V processors available that let you use
                | modern toolchains and so on, I'd be surprised if
                | many people were doing new designs with the Z80.
        
         | atleta wrote:
          | It seems it's going to be a new, RISC-V-based chip:
         | 
         | [1] https://www.zdnet.com/article/nasa-has-chosen-these-cpus-
         | to-... [2] https://www.nasa.gov/news-release/nasa-awards-next-
         | generatio...
        
       | m463 wrote:
        | It is amazing they were able to miniaturize a computer to
        | fit into a spaceship.
        | 
        | Previously, calculators were a room full of people, all of
        | whom required food, shelter, clothing and... oxygen.
        
       | nolroz wrote:
       | Hi Forrest!
        
       | orliesaurus wrote:
        | 54 years ago - wow - was the Apollo 11 Guidance Computer
        | (AGC) chip the best tech had to offer back then?
        
         | GlenTheMachine wrote:
         | Yes, given the size, power, and reliability constraints. There
         | were, of course, far more powerful computers around... but not
         | ones you could fit in a spacecraft the size of a Volkswagen
         | Beetle.
         | 
         | The Apollo program consumed something like half of the United
         | States' entire IC fabrication capacity for a few years.
         | 
         | https://www.bbc.com/future/article/20230516-apollo-how-moon-...
        
           | db48x wrote:
            | The AGC was 2 ft^3. I believe the volume was written into the
           | contract for development of the computer, and was simply a
           | verbal guess by the owner of the company during negotiations.
           | On the other hand, they had been designing control systems
           | for aircraft and missiles for over a decade at that point so
           | it was not an entirely uninformed guess.
           | 
           | The amazing thing is that they did manage to make it fit into
            | 2 ft^3, even though the integrated circuits it used had not
           | yet been invented when the contract was written.
        
         | dgacmu wrote:
          | Yes, when accounting for size. If you wanted something the
          | size of a refrigerator, you could buy a Data General Nova
          | in 1969: https://en.m.wikipedia.org/wiki/Data_General_Nova
          | 
          | 8KB of RAM! But hundreds of pounds vs. 70 lb for the AGC,
          | with fairly comparable capability (richer
          | instructions/registers, lower initial clock rate).
          | 
          | The AGC was quite impressive in terms of perf/weight.
        
         | kens wrote:
         | The Apollo Guidance Computer was the best technology when it
         | was designed, but it was pretty much obsolete by the time of
         | the Moon landing in 1969. Even by 1967, IBM's 4 Pi aerospace
         | computer was roughly twice as fast and half the size, using TTL
         | integrated circuits rather than the AGC's RTL NOR gates.
        
       | AnotherGoodName wrote:
        | Pretty much all USB chips have a fully programmable CPU when
        | you go into the data sheets. It feels silly for simple HID
        | or charging devices, but basic microcontrollers are cheap
        | and actually save costs compared to ASICs.
        
         | hinkley wrote:
         | I still want to see postgres or sqlite running straight on a
         | storage controller some day. They probably don't have enough
         | memory to do it well though.
        
           | lmm wrote:
           | Booting Linux on a hard drive was what, 15 years ago now?
        
             | hinkley wrote:
             | Have you ever tried to google that?
        
               | lmm wrote:
               | Yes - up until a few years ago it was easy to find by
               | googling, but now google has degraded to the point where
               | I can't manage it.
        
               | winrid wrote:
               | Do you mean this?
               | https://spritesmods.com/?art=hddhack&page=1
               | 
               | Searched "run linux on hard drive without cpu or ram" on
               | Google - third result.
        
               | Dah00n wrote:
                | Yeah sure, people on HN say this all the time, but
                | in reality it isn't true, like a lot of comments
                | that get repeated on here. I found it on the first
                | try.
        
         | petermcneeley wrote:
         | I would also argue that this is another example of software
         | eating the world. The role of the electrical engineer is
         | diminished day by day.
        
           | etrautmann wrote:
           | Nah - there are lots of places where you need EEs still.
           | Anything that interfaces with the world. Having
           | programmability does not move most challenges out of the
           | domain of EE. Much of it is less visible than the output of a
           | software role perhaps.
        
             | FredPret wrote:
             | There will always be problems that can only be solved by an
             | EE, chem eng, mech eng, etc.
             | 
             | But the juiciest engineering challenges involve figuring
             | out business logic / mission decisions. This is done
             | increasingly in software while the other disciplines
             | increasingly make only the interfaces.
        
           | FredPret wrote:
            | The role of the non-software engineer, not just electrical.
        
           | Almondsetat wrote:
           | the role of the electrical engineer who doesn't know a thing
           | about programming is diminished day by day*
        
         | Shawnj2 wrote:
          | Where I work they were considering using an FPGA over an
          | MCU for a certain task, but decided against it because the
          | FPGA couldn't reach the same low power level as the MCU.
        
         | dclowd9901 wrote:
         | Yeesh. Beware the random wall wart I guess.
        
           | masklinn wrote:
           | Wall warts are not even the biggest worry:
           | https://shop.hak5.org/products/omg-cable
        
       | vlovich123 wrote:
        | Is the weight/cost calculus sufficiently improved now that
        | it's cheaper to shield the processor in its entirety rather
        | than trying to rad-harden the circuitry itself (much more
        | expensive due to the inability to use off-the-shelf parts,
        | and it limits the ability to use newer tech)?
        | 
        | If I recall correctly, this was one of the areas explored by
        | the Mars drone, although I'm not sure if Mars surface
        | radiation concerns are different from what you would design
        | for in space.
        
         | thebestmoshe wrote:
         | Isn't this basically what SpaceX is doing?
         | 
         | > The flight software is written in C/C++ and runs in the x86
         | environment. For each calculation/decision, the "flight string"
          | compares the results from both cores. If there is an
          | inconsistency, the string is bad and doesn't send any commands.
         | If both cores return the same response, the string sends the
         | command to the various microcontrollers on the rocket that
         | control things like the engines and grid fins.
         | 
         | https://space.stackexchange.com/a/9446/53026
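          | 
          | The gist of that per-string lockstep compare, as a toy
          | Python sketch (names and values are mine, purely
          | illustrative, not SpaceX's actual code):
          | 
          |     def flight_string_output(core_a_result, core_b_result):
          |         # A command is emitted only when both cores of the
          |         # string agree; otherwise the string stays silent.
          |         if core_a_result == core_b_result:
          |             return core_a_result  # string is good
          |         return None               # string is bad, no command
          | 
          |     # a bit flip on one core silences the whole string
          |     ok = flight_string_output("+2.0", "+2.0")   # "+2.0"
          |     bad = flight_string_output("+2.0", "+2.1")  # None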
        
           | jojobas wrote:
           | That sounds way too low. Modern fly-by-wire planes are said
           | to have 12-way voting.
        
             | somethingsaid wrote:
              | If you read the link, it's actually two CPU cores on a
              | single die, each returning a result string. Then three
              | of those CPUs send the resulting strings to the
              | microcontrollers, which weigh them together to choose
              | what to do. So it's six times redundant in actuality.
        
               | SV_BubbleTime wrote:
                | That's not 6x though.
                | 
                | It's more solidly 3x, or 3x+3y: a power failure at
                | one chip doesn't take a 6x system to 5x, it takes it
                | to 4x with the two remaining physical units, because
                | two logical cores went down with one error.
                | 
                | The x being physical units, and the y being CPUs in
                | lockstep so that the software is confirmed not to
                | bug out somewhere.
                | 
                | It's 6x for the calculated code portion only, but 3x
                | for the CPUs and 1-3x for power or solder or circuit
                | board.
                | 
                | I know it's pretty pedantic, but I would rate it at
                | the lowest redundancy of any part, which is likely
                | 2-3x.
        
             | dikei wrote:
              | It's more complicated than that; in the link they
              | describe it better:
             | 
             | >> The microcontrollers, running on PowerPC processors,
             | received three commands from the three flight strings. They
             | act as a judge to choose the correct course of actions. If
             | all three strings are in agreement the microcontroller
             | executes the command, but if 1 of the 3 is bad, it will go
             | with the strings that have previously been correct.
             | 
              | This is a variation of Byzantine Tolerant Concensus,
              | with a tie-breaker to guarantee progress in case of an
              | absent voter.
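              | 
              | A sketch of that judging rule in Python (my own
              | illustration of the described behavior, not actual
              | flight code):
              | 
              |     from collections import Counter
              | 
              |     def judge(commands, good_history):
              |         # commands: one command per flight string;
              |         # good_history: count of past correct results
              |         # per string, used as the tie-breaker.
              |         cmd, votes = Counter(commands).most_common(1)[0]
              |         if votes >= 2:
              |             return cmd  # majority agreement wins
              |         # all strings disagree: trust the string that
              |         # has previously been correct most often
              |         best = max(range(len(commands)),
              |                    key=lambda i: good_history[i])
              |         return commands[best]
              | 
              |     print(judge(["a", "a", "b"], [9, 9, 9]))  # a
              |     print(judge(["a", "b", "c"], [2, 9, 4]))  # b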
        
               | mcbutterbunz wrote:
               | I'm curious how often the strings are not in agreement.
               | Is this a very rare occurrence or does it happen often?
        
               | denton-scratch wrote:
               | > Byzantine Tolerant Concensus
               | 
               | I was taken to task for mis-spelling "consensus"; I used
               | to spell it with two 'c's and two 's's, like you. It was
               | explained to me that it's from the same root as
               | "consent", and that's how I remember the right spelling
               | now.
        
             | p-e-w wrote:
             | I don't understand this. If two or more computers fail in
             | the same way simultaneously, isn't it much more likely that
             | there is a systemic design problem/bug rather than some
             | random error? But if there is a design problem, how does
             | having more systems voting help?
        
               | etrautmann wrote:
               | The multi processor voting approach seeks to solve issues
               | introduced by bit flips caused by radiation, not
               | programming issues.
        
               | gurchik wrote:
               | Having at least 3 computers allows you the option to
               | disable a malfunctioning computer while still giving you
               | redundancy for random bit flips or other environmental
               | issues.
        
               | GuB-42 wrote:
                | It is possible for a random error to affect two
                | computers simultaneously. If they are made on the
                | same assembly line, they may fail in exactly the
                | same way, especially if they share the same wires.
                | 
                | That's the reason I sometimes see it recommended,
                | for RAID systems, to avoid buying all the same disks
                | at the same time: since they will be used in the
                | same way in the same environment, there is a good
                | chance they will fail at the same time, defeating
                | the point of a redundant system.
                | 
                | Also, to guard against bugs and design problems,
                | critical software is sometimes developed twice or
                | more by separate teams using different methods. So
                | you may have several combinations of software and
                | hardware. You may also have redundant boards in the
                | same box, and redundant boxes as well.
        
               | adastra22 wrote:
               | They are not going to fail the same way simultaneously.
               | This is protecting against cosmic ray induced signal
               | errors within the logic elements, not logic errors due to
               | bad software.
        
               | jojobas wrote:
                | Which is why different sets of computers will run
                | software developed by independent groups on
                | different principles, so that they are very unlikely
                | to fail simultaneously.
        
             | jcalvinowens wrote:
             | > Modern fly-by-wire planes are said to have 12-way voting
             | 
             | Do you have a source for that? Everything I've ever read
             | about Airbus says the various flight control systems are
             | doubly redundant (three units). Twelve sounds like it would
             | be far beyond diminishing returns...
        
               | jojobas wrote:
               | That was word of mouth. This website says 5 independent
               | computers, of which 2 use different hardware and software
               | so as not to fail in the same fashion.
               | 
               | https://www.rightattitudes.com/2020/04/06/airbus-flight-
               | cont...
               | 
               | I'd imagine every computer relies on redundant
               | stick/pedal encoders, which is how a 12-way notion
               | appeared.
        
               | jcalvinowens wrote:
               | That blog isn't very authoritative, and doesn't go into
               | any detail at all.
               | 
               | > I'd imagine every computer relies on redundant
               | stick/pedal encoders, which is how a 12-way notion
               | appeared.
               | 
               | That's disingenuous at best. The lug nuts on my car
               | aren't 20x redundant... if you randomly loosen four,
               | catastrophic failure is possible.
        
               | numpad0 wrote:
               | This shallow dismissal sounds "sus". It's just off.
        
           | gumby wrote:
           | Seems risky. I remember the automated train control system
           | for the Vienna Hauptbahnhof (main train station) had an x86
           | and a SPARC, one programmed in a procedural language and one
           | in a production language. The idea was to make it hard to
           | have the same bug in both systems (which could lead to a
           | false positive in the voting mechanism).
        
             | ThePowerOfFuet wrote:
             | This is a great technique to avoid common-mode failures.
        
               | kqr wrote:
               | Do you have data to back that claim up? I remember
               | reading evidence to the contrary, namely that programmers
               | working on the same problem -- even in different
               | environments -- tend to produce roughly the same set of
               | bugs.
               | 
               | The conclusion of that study was that parallel
               | development mainly accomplishes a false sense of
               | security, and most of the additional reliability in those
               | projects came from other sound engineering techniques.
               | But I have lost the reference, so I don't know how much
               | credibility to lend my memory.
        
               | Dah00n wrote:
               | Isn't this exactly what aeroplanes do? Two or more
               | control systems made in different hardware, etc?
        
               | kqr wrote:
               | I'm not saying people aren't doing it! I'm just not sure
               | it has the intended effect.
               | 
               | (Also to protect against physical failures it works,
               | because physical failures are more independent than
               | software ones, as far as I understand.)
        
               | gumby wrote:
               | That was the reason for the different programming
               | paradigms (Algol-like vs Prolog-like), to reduce the
               | probability.
        
               | fanf2 wrote:
                | After some search-engineering I found Knight and
                | Leveson (1986), "An Experimental Evaluation of the
                | Assumption of Independence in Multi-Version
                | Programming", which my memory tells me is the
                | classic paper on common failure modes in
                | reliability via N-version software; I was taught
                | about it in my undergrad degree.
                | http://sunnyday.mit.edu/papers.html#ft
                | 
                | Leveson also wrote the report on Therac-25.
        
         | jojobas wrote:
          | Aren't high-energy space particles a pain, in the sense
          | that the more shielding you have, the more secondary
          | radiation you generate?
        
           | lazide wrote:
            | It depends on the type of shielding. For gamma
            | radiation, lead alone is a definite problem this way,
            | as it is for neutrons and high-speed charged
            | particles/cosmic rays.
            | 
            | Water, less so.
        
         | kens wrote:
         | It's always been an option to use shielding rather than rad-
         | hard chips or in combination. RCA's SCP-234 aerospace computer
         | weighed 7.9 pounds, plus 3 pounds of lead sheets to protect the
         | RAM and ROM. The Galileo probe used sheets of tungsten to
         | protect the probe relay receiver processor, while the Galileo
         | plasma instrument used tantalum shielding. (I was just doing
         | some research on radiation shielding.)
        
         | adgjlsfhk1 wrote:
          | One thing worth remembering is that a bigger computer runs
          | into a lot more radiation. The Cortex-M0 is about 0.03
          | mm^2 vs. about 0.2 m^2 for the Apollo Guidance Computer;
          | as such, the M0 will see about 6 million times less
          | radiation.
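          | 
          | The back-of-the-envelope arithmetic behind that ratio
          | (figures from above, my own sketch):
          | 
          |     agc_mm2 = 0.2 * 1000 * 1000  # 0.2 m^2 in mm^2
          |     m0_mm2 = 0.03                # Cortex-M0 die area
          |     print(agc_mm2 / m0_mm2)      # ~6.7e6: ratio of cross-
          |                                  # sections, hence of hits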
        
           | bpye wrote:
           | Aren't the smaller transistors going to be more susceptible
           | to damage and bit flips though?
        
       | kens wrote:
       | > Apollo 11 spacecraft contains 4 computers
       | 
       | Analog computers don't get the respect they deserve. There's one
       | more computer, the FCC. The Flight Control Computer is an analog
       | computer in the Saturn V that controlled the rocket gimbals. It's
       | a two-foot cylinder weighing almost 100 pounds.
        
         | nothercastle wrote:
          | Exactly this. A lot of the systems had built-in analog
          | computers. It's a lot cheaper to build them now with
          | electronics, but you need more computing power to do
          | things that were previously done mechanically.
        
           | hinkley wrote:
           | Analog computers have to be rebuilt if it turns out the
           | program is wrong though, don't they?
        
             | nothercastle wrote:
              | Yes, though they tend to be mechanically tuned. So
              | something like a pneumatic computer will get tuned to
              | operate in some range of inputs, and you probably
              | bench-prototype it before you mass-produce it.
        
             | coder543 wrote:
             | In the context of this thread, I believe even a digital
             | computer would have to be rebuilt if the program is
             | wrong... :P
             | 
             | Unless you typically salvage digital computers from the
             | wreckage of a failed rocket test and stick it in the next
             | prototype. If the FCC is wrong, kaboom.
        
               | scaredginger wrote:
               | I'm pretty sure you can perform tests and find defects
               | without actually flying the rocket
        
               | Tommstein wrote:
               | Presumably they meant a program being discovered to be
               | wrong before the computer was actually launched. And
               | meant literally building a whole new computer, not just
               | recompiling a program.
        
               | hinkley wrote:
                | Yeah, though to be fair, some of the programs Apollo
                | ran were on hand-woven ROMs, so I may be making too
                | fine a distinction. The program itself was built,
                | not compiled. If we are comparing with today, it
                | would just be installed, not constructed.
        
               | KMag wrote:
               | For the Apollo Guidance Computer, changing the program
               | meant manually re-weaving wires through or around tiny
               | magnet rings. A good part of the cost of the computer was
               | the time spent painstakingly weaving the wires to store
               | the program.
        
               | neodypsis wrote:
               | There's a very nice video about the assembly lines MIT
               | made just for building the Apollo computer [0].
               | 
               | 0. https://www.youtube.com/watch?v=ndvmFlg1WmE
        
               | denton-scratch wrote:
               | Pardon me, but why would you have to re-weave wires
               | around magnetic rings? The magnetic rings are for storing
               | data; the whole point is that you can change the data
               | without rewiring the memory. If you have to re-wire
               | permanent storage (e.g. program storage), that's
               | equivalent to creating a mask ROM, which is basically
               | just two funny-shaped sheets of conductor. There's no
               | need for magnetic rings.
        
               | KMag wrote:
                | No, I'm not talking about magnetic core memory. Core
                | rope memory also used little magnetic rings:
                | https://en.wikipedia.org/wiki/Core_rope_memory
                | Input wires were energized, and they were coupled
                | (or not) to the output wires depending on whether
                | they shared a magnetic ring (or not).
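                | 
                | A toy model of that coupling (my own sketch; real
                | core rope packed many words per core and read out
                | as analog pulses):
                | 
                |     # a word's bits are fixed at weave time: a 1
                |     # means the sense wire threads the energized
                |     # core, a 0 means it bypasses it
                |     weave = {0: {"s0", "s2"}, 1: {"s1"}}
                | 
                |     def read(core, wires=("s0", "s1", "s2")):
                |         # energizing a core couples it to exactly
                |         # the wires threaded through it
                |         return [1 if w in weave[core] else 0
                |                 for w in wires]
                | 
                |     print(read(0))  # [1, 0, 1]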
        
               | denton-scratch wrote:
               | Thanks! I'd never heard of rope memory.
        
               | garaetjjte wrote:
               | https://www.youtube.com/watch?v=hckwxq8rnr0
        
               | Anarch157a wrote:
               | Only if the bug was caught after the computer had been
               | assembled for the mission. For development, they used a
               | simulator. Basically, a cable connected to a mainframe,
               | with the bigger computer simulating the signals a bundle
               | of core rope would produce.
        
               | nothercastle wrote:
                | I had assumed it meant simpler things, like self-
                | balancing pneumatic or mechanical components that
                | always put you at the correct ratio, sort of like a
                | carburetor vs. fuel injection.
        
             | gumby wrote:
             | Typically they were reprogrammed by changing the jumpers.
             | The analogous digital change would be replacing the band of
              | cards in a Jacquard loom.
             | 
             | Much less than "rebuilding".
             | 
             | There have been some hybrids too.
        
         | KRAKRISMOTT wrote:
         | You forgot the most important one, the human computers at
         | ground control.
         | 
         | Who said women can't do math?
         | 
         | https://www.smithsonianmag.com/science-nature/history-human-...
        
           | mulmen wrote:
           | > Who said women can't do math?
           | 
           | The straw man?
        
             | legostormtroopr wrote:
             | Straw person?
        
           | asylteltine wrote:
           | > Who said women can't do math?
           | 
           | Nobody
        
             | shermantanktop wrote:
             | A quick search will show you many many examples that say
             | otherwise.
             | 
             | https://www.dailymail.co.uk/news/article-524390/The-women-
             | ad...
             | 
             | Granted, that same search will show you many examples of
             | content accusing unnamed other people of having this
             | attitude.
             | 
             | https://www.science.org/content/article/both-genders-
             | think-w...
             | 
             | It's an antiquated notion in my mind, but I don't think it
             | is a thing of the past.
        
               | wongarsu wrote:
               | "women can't do basic math" being an antiquated notion is
               | even weirder. As GP pointed out, companies used to employ
               | lots of predominantly female computers. As a consequence,
               | the first programmers and operators of digital computers
               | were also predominantly women, even if the engineers
               | building them were mostly men.
               | 
               | Women being bad at advanced math would make sense as an
               | antiquated notion, but those in charge of hiring
               | decisions until about 50 years ago evidently thought
               | women were great at basic math.
               | 
               | The study you linked showing that women lag behind men in
               | math to a degree proportional to some gender disparity
               | metric is also interesting, but doesn't really tell us
               | how we got here.
        
             | creatonez wrote:
             | No one says it, but our implicit biases do -
             | https://ilaba.wordpress.com/2013/02/09/gender-
             | bias-101-for-m...
        
           | interfixus wrote:
           | > _Who said women can 't do math?_
           | 
           | Asimov.
           | 
           | https://literature.stackexchange.com/questions/25852/where-d.
           | ..
        
         | kragen wrote:
         | i think it's (unintentionally) misleading to describe analog
         | 'computers' as 'computers'. what distinguishes digital
         | computers from other digital hardware is that they're turing-
         | complete (if given access to enough memory), and there isn't
         | any similar notion in the analog domain
         | 
         | the only reason they have the same name is that they were both
         | originally built to replace people cranking out calculations on
         | mechanical desk calculators, who were also called 'computers'
         | 
         | the flight control 'computer' has more in common with an analog
         | synthesizer module than it does with a cray-1, the agc, an
         | arduino, this laptop, or these chargers, which are by
         | comparison almost indistinguishable
        
           | ezconnect wrote:
            | They both do the same thing: compute an output from
            | given inputs. So they are properly distinguished from
            | each other by how they do the computing. They both
            | deserve the name 'computer'.
        
             | kragen wrote:
             | only in the same sense that a machinist's micrometer, an
             | optical telescope, an analog television set, an acoustic
             | guitar, a letterpress printing press, a car's manual
             | transmission, a fountain pen, a nomogram, and a transistor
             | also 'compute an output from given inputs'
             | 
             | do you want to call them all 'computers' now?
        
               | justinjlynn wrote:
                | What's wrong with that? They are. We can always make
                | the finer distinction of "Von Neumann architecture
                | inspired digital electronic computer" if you wish to
                | exclude the examples you've given. After all,
                | anything which transforms a particular input to a
                | particular output in a consistent fashion could be
                | considered a computer which implements a particular
                | function.
                | 
                | I would say: don't confuse the word's meaning with
                | the object's function. Simply choose a context in
                | which a word refers to a particular meaning, adapt
                | to others' contexts and translate, and deal with the
                | fact that there is no firm division between computer
                | and not-computer out in the world somewhere, apart
                | from people and their context-rich communications.
                | 
                | If the context in which you're operating with an
                | interlocutor is clear enough for you to jump to a
                | correction of usage... simply don't, beyond
                | verifying your translation is correct, of course. As
                | you're already doing this - likely without realising
                | it - taking care to do so consciously will likely
                | make your communications more efficient, congenial,
                | and illuminating than they otherwise would be.
        
               | shermantanktop wrote:
               | This is the double-edged sword of deciding to widen (or
               | narrow) the meaning of a term which already has a
               | conventional meaning.
               | 
               | By doing so, you get to make a point--perhaps via
               | analogy, perhaps via precision, perhaps via pedantry--
               | which is illuminating for you but now confusing for your
               | reader. And to explain yourself, you must swim upstream
               | and redefine a term while simultaneously making a
               | different point altogether.
               | 
               | Much has been written about jargon, but a primary benefit
               | of jargon is the chance to create a domain-specific
               | meaning without the baggage of dictionary-correct
               | associations. It's also why geeks can be bores at dinner
               | parties.
        
               | derefr wrote:
               | We live in a society (of letters.) Communication is not
               | pairwise in a vacuum; all communication is in context of
               | the cultural zeitgeist in which it occurs, and by
               | intentionally choosing to use a non-zeitgeist-central
               | definition of a term, you are wasting the time of anyone
               | who talks to you.
               | 
               | By analogy to HCI: words are affordances. Affordances
               | exist because of familiarity. Don't make a doorknob that
               | you push on, and expect people not to write in telling
               | you to use a door-bar on that door instead.
        
               | atoav wrote:
               | You are not wrong, yet you are. All of these things are
               | doing computation in a vague, creative sense -- sure. But
               | if we call everything that does this or its equivalent a
               | _computer_ we would have to find new words for the thing
               | we mean to be a computer currently.
               | 
               | Unilaterally changing language is not forbidden, but if
                | _The Culture Wars(tm)_ has taught us anything, it is
               | that people are allergic to _talking_ about what they see
               | as mandated changes to their language, even if it is
               | reasonable and you can explain it.
               | 
               | Colour me stoked, but you could still just do it
               | unilaterally and wait till somebody notices.
               | 
               | However my caveat with _viewing everything as
               | computation_ is that you fall into the same trap as
               | people in the ~1850s did when they wanted to describe
               | everything in the world using complex mechanical devices,
               | because that was the bleeding edge back then. Not
               | everything is an intricate system of pulleys and levers
               | it turned out, even if theoretically you could mimic
               | everything if that system was just complex enough.
        
               | adrian_b wrote:
               | The arithmetic circuits alone, like adders, multipliers
               | etc., regardless if they are mechanical or electronic,
               | analog or digital, should not be called computers.
               | 
               | When the arithmetic circuits, i.e. the "central
               | arithmetical part", as called by von Neumann, are coupled
               | with a "central control part", as called by von Neumann,
               | i.e. with a sequencer that is connected in a feedback
               | loop with the arithmetic part, so that the computation
               | results can modify the sequence of computations, then
               | this device must be named as a "computer", regardless
               | whether the computations are done with analog circuits or
               | with digital circuits.
               | 
               | What defines a computer (according to the definition
               | already given by von Neumann, which is the right
               | definition in my opinion) is closing the feedback loop
               | between the arithmetic part and the control part, which
               | raises the order of the system in comparison with a
               | simple finite state automaton, not how those parts are
               | implemented.
               | 
               | The control part must be discrete, i.e. digital, but the
               | arithmetic part can be completely analog. Closing the
               | feedback loop, i.e. the conditional jumps executed by the
               | control part, can be done with analog comparators that
               | provide the predicates tested by the conditional jumps.
               | The state of an analog arithmetic part uses capacitors,
               | inductors or analog integrators, instead of digital
               | registers.
               | 
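                | A toy illustration of that closed loop (entirely my
                | own sketch, in Python): a digital sequencer whose
                | conditional jump reads a comparator on simulated
                | analog state.
                | 
                |     state = 0.0  # "analog" integrator output
                |     pc = 0       # digital sequencer step
                |     program = ["integrate", "compare", "halt"]
                |     while program[pc] != "halt":
                |         if program[pc] == "integrate":
                |             state += 0.3  # analog integration step
                |             pc = 1
                |         else:
                |             # comparator predicate feeds back into
                |             # the control part: the conditional jump
                |             pc = 0 if state < 1.0 else 2
                |     print(state)  # ~1.2, first value past threshold
                | 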
               | Several decades ago, I had to debug an analog computer
               | during its installation process, before functioning for
               | the first time. That was in a metallurgic plant, and the
               | analog computer provided outputs that controlled the
               | torques of a group of multi-megawatt DC electric motors.
               | The formulae used in the analog computations were very
               | complex, with a large number of adders, multipliers,
               | integrators, square root circuits and so on, which
               | combined inputs from many sensors.
               | 
               | That analog computer (made with op amps) performed a
               | sequence of computations much more complex than the
               | algorithms that were executed on an Intel 8080, which
               | controlled various on-off execution elements of the
               | system, like relays and hydraulic valves and the
               | induction motors that powered some pumps.
               | 
               | The main reason why such analog computers have become
               | obsolete is the difficulty of ensuring that the accuracy
               | of their computations will not change due to aging and
               | due to temperature variations. Making analog computers
               | that are insensitive to aging and temperature raises
               | their cost much above modern digital microcontrollers.
        
               | kragen wrote:
               | as you are of course aware, analog 'computers' do not
               | have the 'central control part' that you are arguing
               | distinguishes 'computers' from 'arithmetic circuits
               | alone'; the choice of which calculation to perform is
               | determined by how the computer is built, or how its
               | plugboard is wired. integrators in particular do have
               | state that changes over time, so the output at a given
               | time is not a function of the input at only that time,
               | but of the entire past, and as is well known, such a
               | system can have extremely complex behavior (sometimes
               | called 'chaos', though in this context that term is
               | likely to give rise to misunderstanding)
               | 
               | you can even include multiplexors in your analog
               | 'computer', even with only adders and multipliers and
               | constants; _x_ * (1 + -1 * _y_ ) + _z_ * _y_ interpolates
               | between _x_ and _z_ under the control of _y_ , so that
               | its output is conditionally either _x_ or _z_ (or some
               | intermediate state). but once you start including
               | feedback to push _y_ out of that intermediate zone, you
               | 've built a flip-flop, and you're well on your way to
               | building a _digital_ control unit (one you could probably
               | build more easily out of transistors rather than op-
               | amps). and surely before long you can call it a digital
               | computer, though one that is controlling precision linear
               | analog circuitry
               | 
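                | here's that interpolation-as-multiplexer trick as a
                | quick python sketch (illustrative only):
                | 
                |     def analog_mux(x, z, y):
                |         # y = 0 selects x, y = 1 selects z;
                |         # intermediate y gives a weighted blend
                |         return x * (1 + -1 * y) + z * y
                | 
                |     print(analog_mux(3.0, 7.0, 0.0))  # 3.0 -> x
                |     print(analog_mux(3.0, 7.0, 1.0))  # 7.0 -> z
                |     print(analog_mux(3.0, 7.0, 0.5))  # 5.0 blend
                | 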
               | it is very commonly the case that analog computation is
               | much, much faster than digital computation; even today,
               | with microprocessors a hundred thousand times faster than
               | an 8080 and fpgas that are faster still, if you're doing
               | submillimeter computation you're going to have to do your
               | front-end filtering, upconversion or downconversion, and
               | probably even detection in the analog domain
        
               | adrian_b wrote:
               | Most "analog computers" have been simple, and even if
               | they usually provided the solution of a system of
               | ordinary differential equations, that does not require a
                | control part, making them no closer to a complete
               | computer than a music box that performs a fixed sequence.
               | 
               | I agree that this kind of "analog computers" does not
               | deserve the name of "computer", because they are
               | equivalent only with the "registers + ALU" (RALU) simple
               | automaton that is a component of a CPU.
               | 
               | Nevertheless, there is no reason why a digital control
               | part cannot be coupled with an analog arithmetic part and
               | there have existed such "analog computers", even if they
               | have been rarely used, due to high cost and complexity.
               | 
               | It is not completely unlikely that such "analog
               | computers", consisting of a digital control part and an
               | analog arithmetic part, could be revived with the purpose
               | of implementing low-resolution high-speed machine
               | learning inference.
               | 
               | Even now, in circuits like analog-digital converters,
               | there may be analog computing circuits, like switched-
               | capacitor filters, which are reconfigurable by the
               | digital controller of the ADC, based on various criteria,
               | which may depend on the digital output of the converter
               | or on the outputs of some analog comparators (which may
               | detect e.g. the range of the input).
        
               | kragen wrote:
               | i agree completely; thank you for clarifying despite my
               | perhaps confrontational tone
               | 
               | in some sense almost any circuit in which a digital
               | computer controls an analog multiplexer chip or a so-
               | called digital potentiometer could qualify. and cypress's
               | psoc line has a bit of analog circuitry that can be thus
               | digitally reconfigured
        
               | kens wrote:
               | You're describing a "hybrid computer". These were
               | introduced in the late 1950s, combining a digital
               | processor with analog computing units. I don't understand
               | why you and kragen want to redefine standard terms; this
               | seems like a pointless linguistic exercise.
        
               | kragen wrote:
               | because 'computer' has a meaning now that it didn't have
               | 65 years ago, and people are continuously getting
               | confused by thinking that 'analog computers' are
               | computers, as they understand the term 'computers', which
               | they aren't; they're a different thing that happens to
               | have the same name due to a historical accident of how
               | the advent of the algorithm happened
               | 
               | this is sort of like how biologists try to convince
               | people to stop calling jellyfish 'jellyfish' and starfish
               | 'starfish' because they aren't fish. the difference is
               | that it's unlikely that someone will get confused about
               | what a jellyfish is because they have so much information
               | about jellyfish already
               | 
               | my quest to get people to call cellphones 'hand
               | computers' is motivated by the same values but is
               | probably much more doomed
        
               | adrian_b wrote:
               | "Hybrid computer" cannot be considered as a standard
               | term, because it has been used ambiguously in the past.
               | 
               | Sometimes it has been applied to the kind of computers
               | mentioned by me, with a digital control part and a
               | completely analog arithmetic part.
               | 
               | However it has also been frequently used to describe what
               | were hybrid arithmetic parts, e.g. which included both
               | digital registers and digital adders and an analog
               | section, for instance with analog integrators, which was
               | used to implement signal processing filters or solving
               | differential equations.
               | 
               | IMO, "hybrid computer" is appropriate only in the second
               | sense, for hybrid arithmetic parts.
               | 
               | The control part of a CPU can be based only on a finite
               | state automaton, so there is no need for any term to
               | communicate this.
               | 
               | On the other hand, the arithmetic part can be digital,
               | analog or hybrid, so it is useful to speak about digital
               | computers, analog computers and hybrid computers, based
               | on that.
        
           | thriftwy wrote:
           | I wonder why you can't make a turing complete analog computer
           | using feedback loops.
        
             | progval wrote:
             | You can: https://en.wikipedia.org/wiki/General_purpose_anal
             | og_compute...
             | 
             | There is still active research in the area, eg. https://www
             | .lix.polytechnique.fr/~bournez/i.php?n=Main.Publi...
        
               | kragen wrote:
               | a universal turing machine is a particular machine which
               | can simulate all other turing machines. the gpac, by
               | contrast, is a _family_ of machines: all machines built
               | out of such-and-such a set of parts
               | 
               | you can't simulate an 11-integrator general-purpose
               | analog computer or other differential analyzer with a
               | 10-integrator differential analyzer, and you can't
               | simulate a differential analyzer with 0.1% error on a
               | (more typical) differential analyzer with 1% error,
               | unless it's 100x as large (assuming the error is
               | gaussian)
               | 
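                | the scaling claim, sketched in python (my toy
                | numbers: averaging n independent gaussian errors
                | shrinks them by sqrt(n), so 10x the precision costs
                | ~100x the hardware):
                | 
                |     import random
                | 
                |     def mean_abs_error(n, sigma=0.01, trials=10000):
                |         # average of n independent 1%-sigma errors
                |         return sum(
                |             abs(sum(random.gauss(0, sigma)
                |                     for _ in range(n)) / n)
                |             for _ in range(trials)) / trials
                | 
                |     print(mean_abs_error(1))    # ~0.008 (1% parts)
                |     print(mean_abs_error(100))  # ~0.0008 (0.1%)
                | 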
               | the ongoing research in the area is of course very
               | interesting but a lot of it relies on an abstraction of
               | the actual differential-analyzer problem in which
               | precision is infinite and error is zero
        
               | thriftwy wrote:
               | Sure, you cannot easily simulate another analog computer,
               | but this is not the requirement. The requirement is
               | turing completeness, which can be done.
        
               | lisper wrote:
               | It can? How?
        
               | kragen wrote:
               | as i understand it, with infinite precision; the real
               | numbers within some range, say -15 volts to +15 volts,
               | have a bijection to infinite strings of bits (some
               | infinitesimally small fraction of which are all zeroes
               | after a finite count). with things like the logistic map
               | you can amplify arbitrarily small differences into
               | totally different system trajectories; usually when we
               | plot bifurcation diagrams from the logistic map we do it
               | in discrete time, but that is not necessary if you have
               | enough continuous state variables (three is obviously
               | sufficient but i think you can do it with two)
               | 
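                | for concreteness, the discrete-time version of that
                | amplification (a sketch; r = 4.0 is the fully
                | chaotic regime of the logistic map):
                | 
                |     r = 4.0
                |     a, b = 0.3, 0.3 + 1e-12  # differ by 1 in 10^12
                |     for _ in range(60):
                |         a, b = r * a * (1 - a), r * b * (1 - b)
                |     print(abs(a - b))  # typically O(1): the tiny
                |                        # gap has become divergent
                |                        # trajectories
                | 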
               | given these hypothetical abilities, you can of course
               | simulate a two-counter machine, but a bigger question is
               | whether you can compute anything a turing machine cannot;
               | after all, in a sense you are doing an infinite amount of
               | computation in every finite interval of time, so maybe
               | you could do things like compute whether a turing machine
               | will halt in finite time. so far the results seem to
               | support the contrary hypothesis, that extending
               | computation into continuous time and continuously
               | variable quantities in this way does not actually grant
               | you any additional computational power!
               | 
               | this is all very interesting but obviously not a useful
               | description of analog computation devices that are
               | actually physically realizable by any technology we can
               | now imagine
        
               | fanf2 wrote:
               | Except that infinite precision requires infinite settling
               | time. (Which I guess is the analogue computing version of
               | arithmetic not being O(1) even though it is usually
               | modelled that way.)
        
               | couchand wrote:
               | Infinite precision is just analog's version of an
               | infinitely-long tape.
        
               | lisper wrote:
               | But even with infinite precision, how do you build a
               | universal analog computer? And how do you program it?
        
               | fanf2 wrote:
               | Right :-) I prefer to say a Turing machine's tape is
               | "unbounded" rather than "infinite" because it supports
               | calculations of arbitrarily large but finite size. So in
               | the analogue case, I would say unbounded precision and
               | unbounded waiting time.
        
               | Dylan16807 wrote:
               | Infinite precision is exponentially more difficult. It's
               | very easy to have unbounded tape, with a big buffer of
               | tape and some kind of "factory" that makes more tape when
               | you get near the edge. Unbounded precision? Not going to
               | happen in a real machine. You get to have several digits
               | and no more.
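               | 
               | The tape "factory" really is that simple to sketch
               | (hypothetical Python, not modeled on any real
               | machine):
               | 
               |   tape, head = ["_"], 0      # "_" is the blank symbol
               |   def move(delta):           # delta is +1 or -1
               |       global head
               |       head += delta
               |       if head == len(tape):
               |           tape.append("_")   # make more tape on demand
               |       elif head < 0:
               |           tape.insert(0, "_")  # ...on either end
               |           head = 0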
        
               | lisper wrote:
               | It's much worse than that. There is this little thing
               | called Planck's constant, and there's this other little
               | thing called the second law of thermodynamics. So even if
               | you allow yourself arbitrarily advanced technology I
               | don't see how you're going to make it work without new
               | physics.
        
               | thriftwy wrote:
               | It would not be very interesting: you would lose all
               | the interesting properties of the analog computer and
               | end up with a poorly performing Turing machine. Still,
               | it would have the necessary loops and branches, since
               | you can always build digital on top of analog with
               | some additional harness.
        
           | eesmith wrote:
           | What do you regard as the first digital, Turing-complete (if
           | given enough memory) computer?
           | 
           | ENIAC, for example, was not a stored-program computer.
           | Reprogramming required rewiring the machine.
           | 
           | On the other hand, by clever use of arithmetic calculations, 
           | https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37..
           | .. says the Z3 could perform as a Universal Computer, even
           | though, quoting its Wikipedia page, "because it lacked
           | conditional branching, the Z3 only meets this definition by
           | speculatively computing all possible outcomes of a
           | calculation."
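            | 
            | To make the trick concrete: without conditional
            | branching you can still select between precomputed
            | outcomes arithmetically. A toy sketch (Python, not
            | actual Z3 programming):
            | 
            |   def branchless_select(cond, a, b):
            |       # cond is 0 or 1; both "branches" were already
            |       # computed, and arithmetic keeps exactly one
            |       return cond * a + (1 - cond) * b
            | 
            |   print(branchless_select(1, 10, 20))  # 10
            |   print(branchless_select(0, 10, 20))  # 20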
           | 
           | Which makes me think the old punched card mechanical
           | tabulators could also be rigged up as a universal machine,
           | were someone clever enough.
           | 
           | "Surprisingly Turing-Complete" or "Accidentally Turing
           | Complete" is a thing, after all, and
           | https://gwern.net/turing-complete includes a bunch of them.
        
             | kragen wrote:
             | let me preface this with the disclaimer that i am far from
             | an expert on the topic
             | 
             | probably numerous eddies in natural turbulent fluid flows
             | have been digital turing-complete computers, given what we
             | know now about the complexity of turbulence and the
             | potential simplicity of turing-complete behavior. but is
             | there an objective, rather than subjective, way to define
             | this? how complicated are our input-preparation and output-
             | interpretation procedures allowed to be? if there is no
             | limit, then any stone or grain of sand will appear to be
             | turing-complete
             | 
             | a quibble: the eniac was eventually augmented to support
             | stored-program operation but not, as i understand it, until
             | after the ias machine (the johnniac) was already
             | operational
             | 
             | another interesting question there is how much human
             | intervention we permit; the ias machine and the eniac were
             | constantly breaking down and requiring repairs, after all,
             | and wouldn't have been capable of much computation without
             | constant human attention. suppose we find that there is a
             | particular traditional card game in which players can use
             | arbitrarily large numbers. if the players decide to
             | simulate minsky's two-counter machine, surely the players
             | are turing-complete; is the game? are the previous games
             | also turing-complete, the ones where they did not make that
             | decision? does it matter if there happens to be a
             | particular state of the cards which obligates them to
             | simulate a two-counter machine?
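             | 
             | (for concreteness, minsky's two-counter machine is a
             | tiny thing to simulate; a toy interpreter in python,
             | with a made-up instruction encoding:)
             | 
             |   # program: ('inc', reg, next) or
             |   #          ('dec', reg, next_if_nonzero, next_if_zero)
             |   def run(program, regs):
             |       pc = 0
             |       while pc < len(program):
             |           ins = program[pc]
             |           if ins[0] == 'inc':
             |               regs[ins[1]] += 1
             |               pc = ins[2]
             |           elif regs[ins[1]] > 0:
             |               regs[ins[1]] -= 1
             |               pc = ins[2]
             |           else:
             |               pc = ins[3]
             |       return regs
             | 
             |   # move r0 into r1, then halt
             |   print(run([('dec', 0, 1, 2), ('inc', 1, 0)], [5, 0]))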
             | 
             | if instead of attempting to measure the historical internal
             | computational capability of systems that the humans could
             | not perceive at the time, such as thunderstorms and the z3,
             | we use the subjective standard of what people actually
             | programmed to perform universal computation, then the ias
             | machine or one of its contemporaries was the first turing-
             | complete computer (if given enough memory); that's when
             | universal computation first made its effects on human
             | society felt
        
               | eesmith wrote:
               | > is the game?
               | 
               | Sure. One of the "Surprisingly Turing-Complete" examples
               | is that "Magic: the Gathering: not just TC, but above
               | arithmetic in the hierarchy ".
               | 
               | See https://arxiv.org/abs/1904.09828 for the preprint
               | "Magic: The Gathering is Turing Complete",
               | https://arstechnica.com/science/2019/06/its-possible-to-
               | buil... for an Ars Technica article, and
               | https://hn.algolia.com/?q=magic+turing for the many HN
               | submissions on that result.
        
               | _a_a_a_ wrote:
               | Could you explain "above arithmetic in the hierarchy"
               | in a few words? TIA, I've never heard of this.
        
               | eesmith wrote:
               | Nope. The source page links to
               | https://en.wikipedia.org/wiki/Arithmetical_hierarchy . I
               | can't figure it out.
               | 
               | An HN comment search, https://hn.algolia.com/?dateRange=a
               | ll&page=0&prefix=false&qu... , finds a few more lay
               | examples, with
               | https://news.ycombinator.com/item?id=21210043 by
               | dwohnitmok being the easiest for me to somewhat make
               | sense of.
               | 
               | I think the idea is, suppose you have an oracle which
               | tells you whether a Turing machine will halt, in
               | finite time. There will still be halting problems for
               | that oracle system, which require an oracle from a
               | higher-level system.
               | (That is how I interpret "an oracle that magically gives
               | you the answer to the halting problem for a lower number
               | of interleavings will have its own halting problem it
               | cannot decide in higher numbers of interleavings").
        
               | BoiledCabbage wrote:
               | This appears to discuss it a bit more. Not certain if
               | it's more helpful than the comment (still going through
               | it), but it does cover it more in detail.
               | 
               | https://risingentropy.com/the-arithmetic-hierarchy-and-
               | compu...
        
               | kragen wrote:
               | mtg is more recent than things like the johnniac, but
               | plausibly there's a traditional card game with the same
               | criteria that predates electronic computation
               | 
               | but then we have to ask thorny ontological questions:
               | does a card game count if it requires a particular
               | configuration to be turing-complete, but nobody ever
               | played it in that configuration? what if nobody ever
               | played the game at all? what if nobody even knew the
               | rules?
        
             | midasuni wrote:
             | Colossus was before eniac, but was also programmed by
             | plugging up. Baby was a stored program machine and had
             | branching
        
           | milkey_mouse wrote:
           | https://www.gleech.org/first-computers
        
             | deaddodo wrote:
             | If you just want to reference the entire data set of
             | potential "first computers" (including ones that aren't
             | viewable in the JavaScript app, due to missing toggles),
             | you can access the source data here (warning, small CSV
             | download):
             | 
             | https://www.gleech.org/files/computers.csv
        
           | eru wrote:
           | Most digital computers are Turing complete, but interestingly
           | not all programming languages are Turing complete.
           | 
           | Turing completeness is a tar pit that makes your code hard to
           | analyse and optimise. It's an interesting challenge to find
           | languages that allow meaningful and useful computation that
           | are not Turing complete. Regular expressions and SQL-style
           | relational algebra (but not Perl-style regular expressions
           | nor most real-world SQL dialects) are examples familiar to
           | many programmers.
           | 
           | Programming languages like Agda and Idris that require that
           | you prove that your programs terminate [0] are another
           | interesting example, less familiar to people.
           | 
           | [0] It's slightly more sophisticated than this: you can also
           | write event-loops that go on forever, but you have to prove
           | that your program does some new IO after a finite amount of
           | time. (Everything oversimplified here.)
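            | 
            | A rough illustration of the distinction, using Python as
            | a stand-in (a termination checker like Agda's or Idris's
            | would accept something like the first definition and
            | reject the second, since no argument structurally
            | decreases):
            | 
            |   # total: each call consumes one list element, so it
            |   # must terminate
            |   def total_sum(xs):
            |       return 0 if not xs else xs[0] + total_sum(xs[1:])
            | 
            |   # not (known to be) total: whether this terminates
            |   # for every n is the open Collatz problem
            |   def collatz_steps(n):
            |       if n == 1:
            |           return 0
            |       n = 3 * n + 1 if n % 2 else n // 2
            |       return 1 + collatz_steps(n)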
        
             | kragen wrote:
             | yes, total functional programming is an interesting
             | research area, and of course almost all of our subroutines
             | are intended to verifiably terminate
        
           | nine_k wrote:
           | In analog computers, software is hard to separate from the
           | hardware. In the ones I had experience with (as part of a
           | university course), the programming part was wiring things on
           | patch panels, not unlike how you do it with modular analog
           | synths. You could make the same machine run a really wide
           | variety of analog calculations by connecting opamps and
           | passive components in various ways.
           | 
           | If we could optimize a set of programs down to the FPGA
           | bitstream or even Verilog level, that would approach the kind
           | of programs analog computers run.
           | 
           | I can't tell anything about Turing completeness though. It's
           | a fully discrete concept, and analog computers operate in the
           | continuous signal domain.
        
         | creatonez wrote:
         | Some info about the Flight Control Computer:
         | 
         | > The Flight Control Computer (FCC) was an entirely analog
         | signal processing device, using relays controlled by the Saturn
         | V Switch Selector Unit to manage internal redundancy and filter
         | bank selection. The FCC contained multiple redundant signal
         | processing paths in a triplex configuration that could switch
         | to a standby channel in the event of a primary channel
         | comparison failure. The flight control computer implemented
         | basic proportional-derivative feedback for thrust vector
         | control during powered flight, and also contained phase plane
         | logic for control of the S-IVB auxiliary propulsion system
         | (APS).
         | 
         | > For powered flight, the FCC implemented the control law $
         | \beta_c = a_0 H_0(s) \theta_e + a_1 H_1(s) \dot{\theta} $ where
         | $ a_0 $ and $ a_1 $ are the proportional and derivative gains,
         | and $ H_0(s) $ and $ H_1(s) $ are the continuous-time
         | transfer functions of the attitude and attitude rate
         | channel structural bending
         | filters, respectively. In the Saturn V configuration, the gains
         | $ a_0 $ and $ a_1 $ were not scheduled; a discrete gain switch
         | occurred. The Saturn V FCC also implemented an electronic
         | thrust vector cant functionality using a ramp generator that
         | vectored the S-IC engines outboard approximately 2 degrees
         | beginning at 20 seconds following liftoff, in order to mitigate
         | thrust vector misalignment sensitivity.
         | 
         | https://ntrs.nasa.gov/api/citations/20200002830/downloads/20...
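         | 
         | A minimal discrete-time sketch of that PD law (Python; the
         | gains, the rigid-body response, and the omission of the
         | bending filters H_0, H_1 are all illustrative, not Saturn V
         | values):
         | 
         |   a0, a1 = 0.8, 0.4            # P and D gains (made up)
         |   theta, theta_dot = 0.1, 0.0  # attitude error (rad), rate
         |   dt = 0.02
         |   for _ in range(500):         # ten seconds of flight
         |       beta_c = -(a0 * theta + a1 * theta_dot)  # gimbal cmd
         |       theta_dot += (2.0 * beta_c) * dt  # crude rigid body
         |       theta += theta_dot * dt
         |   print(f"residual attitude error: {theta:.5f} rad")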
        
       | ashvardanian wrote:
       | Remarkable comparison! I'm surprised it had only one parity bit
       | per 15-bit word. Even on Earth today, typical ECC servers keep
       | eight check bits per 64-bit word.
       | 
       | > IBM estimated in 1996 that one error per month per 256 MiB of
       | RAM was expected for a desktop computer.
       | 
       | https://web.archive.org/web/20111202020146/https://www.newsc...
        
       | ssgodderidge wrote:
       | > The Anker PowerPort Atom PD 2 USB-C Wall Charger CPU is 563
       | times faster than the Apollo 11 Guidance Computer
       | 
       | Wild to think the thing that charges my devices could be
       | programmed to put a human on the moon
        
         | oldgradstudent wrote:
         | > Wild to think the thing that charges my devices could be
         | programmed to put a human on the moon
         | 
         | With a large enough lithium battery, a charger can easily take
         | you part of the way there.
        
         | fuzzfactor wrote:
         | The proven way to fly people to the moon and back using such
         | low-powered computers was to have a supporting cast of
         | thousands who were naturally well qualified, using their
         | personal slide rules to smoothly accomplish things that many of
         | today's engineers would stumble over using their personal
         | computers.
         | 
         | Plenty of engineers on the ground had no computers, and the
         | privileged ones who did had mainframes, not personal at all.
         | 
         | A computer was too valuable to be employed doing anything that
         | didn't absolutely _need_ a computer, most useful for precision
         | or speed of calculation.
         | 
         | But look what happens when you give something like a mainframe
         | to somebody who is naturally good at aerospace when using a
         | slide rule to begin with.
        
         | Someone wrote:
         | From that data point, we don't know for sure. The Apollo
         | Guidance Computer was programmed to put a human on the moon,
         | but never used to actually do it, so no computer ever "put a
         | human on the moon". All landings used "fly by wire", with an
         | astronaut at the stick, and the thrusters controlled by
         | software.
         | 
         | https://www.quora.com/Could-the-Apollo-Guidance-Computer-
         | hav...:
         | 
         |  _"P64. At about 7,000 feet altitude (a point known as "high
         | gate"), the computer switched automatically to P64. The
         | computer was still doing all the flying, and steered the LM
         | toward its landing target. However, the Commander could look at
         | the landing site, and if he didn't like it, could pick a
         | different target and the computer would alter its course and
         | steer toward that target.
         | 
         | At this point, they were to use one of three programs to
         | complete the landing:
         | 
         | P66. This was the program that was actually used for all six
         | lunar landings. A few hundred feet above the surface the
         | Commander told the computer to switch to P66. This is what was
         | commonly known as "manual mode", although it wasn't really. In
         | this mode, the Commander steered the LM by telling the computer
         | what he wanted to do, and the computer made it happen. This
         | continued through landing.
         | 
         | P65. Here's the automatic mode you asked about. If the computer
         | remained in P64 until it was about 150 feet above the surface,
         | then the computer automatically switched to P65, which took the
         | LM all the way to the surface under computer control. The
         | problem is that the computer had no way to look for obstacles
         | or tell how level its target landing site was. On every flight,
         | the Commander wanted to choose a different spot than where the
         | computer was taking the LM, and so the Commander switched to
         | P66 before the computer automatically switched to P65. [Update:
         | The code for P65 was removed from the AGC on later flights. The
         | programmers needed memory for additional code elsewhere, and
         | the AGC was so memory-constrained that adding code one place
         | meant removing something else. By that point it was obvious
         | that none of the crews was ever going to use the automatic
         | landing mode, so P65 was removed.]
         | 
         | P67. This is full-on honest-to-goodness manual mode. In P66,
         | even though the pilot is steering, the computer is still in the
         | loop. In P67, the computer is totally disengaged. It is still
         | providing data, such as altitude and descent rate, but has no
         | control over the vehicle."_
        
       | codezero wrote:
       | Weird question maybe, but does anyone keep track of quantitative
       | or qualitative data that measures the discrepancy between
       | consumer (commercial) and government computer technology?
       | 
       | TBH, it's kind of amazing that a custom computer from 50 years
       | ago has the specs of a common IC/SoC today, but those specs scale
       | with time.
        
         | ajsnigrutin wrote:
         | There is no difference anymore; the only difference is the
         | scale.
         | 
         | Back then, consumers got nothing and governments got large
         | computers (room-sized and up); then consumers got
         | microcomputers (desktop-sized) and governments got larger
         | mainframes; consumers got PCs, governments got big-box
         | supercomputers, ...
         | 
         | And now? Consumers get x86_64 servers*, governments get x86_64
         | servers, and the only difference is how much money you have,
         | how many servers you can buy, and how much space, energy and
         | cooling you need to run them.
         | 
         | * well, "normal users" get laptops and smartphones, but geek-
         | consumers buy servers... and yeah, I know ARM is an
         | alternative.
        
           | codezero wrote:
           | I was asking about anyone tracking the disparity between
           | nation-state computing power and commercially available
           | computing power. This seems like something that's
           | uncontroversial.
        
             | creer wrote:
             | "Nation state" doesn't mean "country". It certainly doesn't
             | mean "rich country".
        
           | tavavex wrote:
           | I'd argue that the difference is the price. There is still
           | quite a bit of a difference between average consumer and
           | business hardware, but compute power is cheap enough that the
           | average person can afford what was previously only reserved
           | for large companies. The average "consumer computer" nowadays
           | is an ARM smartphone, and while server equipment is
           | purchasable, you can't exactly hit up your local electronics
           | store to buy a server rack or a server CPU. You can still get
           | those things quite easily, but I wouldn't say their main goal
           | is being sold to individuals.
        
           | creer wrote:
           | Let's not go too far either. Money and determination still
           | buys results. A disposable government system might be stuffed
           | with large FPGAs, ASICs and other exotics. Which would rarely
           | be found in any consumer system - certainly not in quantity.
           | A government system might pour a lot of money in the design
           | of these and the cost of each unit. So, perhaps not much
           | difference for each standard CPU and computer node but still
           | as much difference as ever in the rest?
        
         | c0pium wrote:
         | Why would you expect there to be one? It's all the same stuff
         | and has been for decades.
        
           | codezero wrote:
           | I expect nation state actors to have immensely more access to
           | computing power than the commercial sector, is that
           | controversial?
        
             | ghaff wrote:
             | Because the big ones spend more money. I expect Google etc.
             | has access to more computing power than most nation states
             | do.
        
             | ianburrell wrote:
             | If you look at the top supercomputers, US national labs
             | occupy most of the top 10. But they aren't enormously
             | larger than the others. They are also built out of
             | standard parts. I'm surprised that they are recent; I
             | expected the government to be slow and behind.
             | 
             | What do you expect the US government to do with lots of
             | computing power? I wouldn't expect the military to need
             | supercomputers. Maybe the NSA would have a lot for cracking
             | something or surveillance. But the big tech companies have
             | more.
        
               | xcv123 wrote:
               | > They are also built of out of standard parts. I'm
               | surprised that they are recent, I expected the government
               | to be slow and behind.
               | 
               | Because the government isn't building them. They are Cray
               | supercomputers, supplied by HPE. Not entirely built with
               | standard parts. Proprietary interconnect and cooling
               | system.
        
         | bregma wrote:
         | By 'government' do you mean banks and large industrial concerns
         | (like the "industry" in "military-industrial complex")? The
         | latter are where all the big iron and massive compute power has
         | always been.
         | 
         | Of course, from a certain point of view, they're many of the
         | same people and money.
        
       | jackhack wrote:
       | It's a fun article, but I would have liked to see at least a
       | brief comparison of power consumption among the four designs.
        
       | jcalvinowens wrote:
       | >> others point out that the LVDC actually contains triply-
       | redundant logic. The logic gives 3 answers and the voting
       | mechanism picks the winner.
       | 
       | This is a very minor point... but three of something isn't triple
       | redundancy: it's double redundancy. Two is single redundancy, one
       | is no redundancy.
       | 
       | Unless the voting mechanism can somehow produce a correct answer
       | from differing answers from all three implementations of the
       | logic, I don't understand how it could be considered triply
       | redundant. Is the voting mechanism itself functionally a fourth
       | implementation?
        
         | kens wrote:
         | The official name for the LVDC's logic is triple modular
         | redundant (TMR). The voting mechanism simply picks the
         | majority, so it can tolerate one failure. The LVDC is a serial
         | computer, which makes voting simpler to implement, since you're
         | only dealing with one bit at a time.
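         | 
         | The majority function itself is tiny; a word-parallel
         | sketch in Python (the LVDC voted bit-serially, and the
         | values here are made up):
         | 
         |   def tmr_vote(a: int, b: int, c: int) -> int:
         |       # each output bit is whatever at least two of the
         |       # three channels agree on
         |       return (a & b) | (b & c) | (a & c)
         | 
         |   # channel b suffers a one-bit upset; the vote masks it
         |   print(bin(tmr_vote(0b1011, 0b1111, 0b1011)))  # 0b1011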
        
         | somat wrote:
         | I find it fascinating to see the two different schools of
         | thought embodied in the LVDC and the AGC.
         | 
         | The LVDC was a highly redundant, cannot-fail design. The AGC
         | had no redundancy and was designed to recover quickly if a
         | failure occurred.
        
       | SV_BubbleTime wrote:
       | >But it is another step toward increasing complexity.
       | 
       | I wish more people understood this, and could better see the
       | coming crisis.
        
       | dang wrote:
       | Discussed at the time (of the article):
       | 
       |  _Apollo 11 Guidance Computer vs. USB-C Chargers_ -
       | https://news.ycombinator.com/item?id=22254719 - Feb 2020 (205
       | comments)
        
       | daxfohl wrote:
       | So in 50 years the equivalent of a gpt4 training cluster from
       | today's datacenters will fit in a cheap cable, and it will run
       | over 100 times faster than a full cluster today.
        
         | FredPret wrote:
         | Computronium
        
         | ko27 wrote:
         | Yeap, that's how exponential growth works. It just never stops.
        
       | Tommstein wrote:
       | Too bad the link to Jonny Kim's biography is broken (one that
       | works: https://www.nasa.gov/people/jonny-kim/). He has to be one
       | of the most impressive humans who has ever lived. Amongst other
       | things, a decorated Navy SEAL, Harvard medical doctor, and
       | astronaut. Sounds like a kid slapping together the ultimate G.I.
       | Joe.
        
       | tavavex wrote:
       | I'm curious - are there any ways of finding out the precise
       | hardware that's used in these small-scale devices that are
       | generally not considered to be computers (like smartphone
       | chargers) without actually having to take them apart? Are there
       | special datasheets, or perhaps some documents for government
       | certification, or anything like it? I've always been fascinated
       | with the barebones, low-spec hardware that runs mundane
       | electronic things, so I want to know where the author got all
       | that information from.
        
         | AnotherGoodName wrote:
         | Generally no but taking them apart to look at the chips inside
         | is easy. Fwiw everything has a CPU in it these days. Actually
         | that's been true since the 70s. Your old 1970's keyboard had a
         | fully programmable CPU in it. Typically an Intel MCS-48 variant
         | https://en.wikipedia.org/wiki/Intel_MCS-48#Uses
         | 
         | Today it's even more the case. You have fully programmable CPUs
         | in your keyboard, trackpad, mouse, all usb devices, etc.
        
         | ArcticLandfall wrote:
         | Wireless devices go through an FCC certification process that
         | publishes teardowns. And there are iFixit teardown posts.
        
       | Klaster_1 wrote:
       | Can you run Doom on a USB-C charger? Did anyone manage to?
        
         | yjftsjthsd-h wrote:
         | I feel like I/O would be the real pain point there. I suppose
         | if you throw out performance you could forward X/VNC/whatever
         | over serial (possibly with TCP/IP in the middle; SLIP is ugly
         | but _so_ flexible), but that's unlikely to be playable.
        
       | somat wrote:
       | The article was a lot of fun; however, I felt it missed an
       | important aspect of the respective computers: I/O channels. I
       | don't know about the USB charge controllers, but the AGC as a
       | flight computer had a bunch of inputs and outputs. Does a
       | Richtek RT7205 have enough I/O?
        
         | jiggawatts wrote:
         | The most powerful chip in the list (Cypress CYPD4126) has 30
         | general-purpose I/O pins.[1]
         | 
         | AFAIK, this is typical of USB controller chips, which generally
         | have about 20-30 I/O pins, but I'm sure there are outliers.
         | 
         | The AGC seems to have four 16-bit input registers and five
         | 16-bit output registers[2], for a total of 144 I/O bits.
         | 
         | [1] https://ta.infinity-
         | component.com/datasheet/9c-CYPD4126-40LQ...
         | 
         | [2]
         | https://en.wikipedia.org/wiki/Apollo_Guidance_Computer#Other...
        
         | theon144 wrote:
         | I have no clue as to the I/O requirements of the AGC, but I
         | imagine that with ~500x the performance, a simple I/O expander
         | could fill the gap?
        
       | continuational wrote:
       | Seems like with cables this powerful, it might make sense for
       | some devices to simply run their logic on the cable CPU, instead
       | of coming with their own.
        
       | ReptileMan wrote:
       | The great thing about the AI age is that we are once again
       | performance-constrained, so people are starting to rediscover
       | the lost art of actually optimizing a program or runtime. (The
       | last such age was the waning days of the PS2. Those guys made
       | GoW 2 run on 32 megs of RAM ... respect.)
        
       | blauditore wrote:
       | I'm a bit tired of all the sensationalist "look what landed on
       | the moon vs. today's hardware" comparisons. The first airplanes
       | didn't have any sort of computer on board, so computation power
       | is not the single deciding factor on the performance and success
       | of such an endeavor.
       | 
       | The software (and hardware) of the Apollo missions was very well-
       | engineered. We all know computation became ridiculously more
       | powerful in the meantime, but that wouldn't make it easy to do
       | the same nowadays. More performance doesn't render the need for
       | good engineering obsolete (even though some seem to heavily lean
       | on that premise).
        
         | BSDobelix wrote:
         | >first airplanes didn't have any sort of computer on board,
         | 
         | Sure they had and often still have, it's called wetware.
         | 
         | >so computation power is not the single deciding factor on the
         | performance and success of such an endeavor
         | 
         | The endeavor to charge a phone?
        
         | hnlmorg wrote:
         | I don't think you're reading these articles in the right spirit
         | if that's your take away from them.
         | 
         | What I find more interesting is to compare how complicated the
         | tech we don't think about has become. It's amazing that a
         | cable, not a smart device or even 80s digital watch, but a
         | literal cable, has as much technology packed into it as Apollo
         | 11 and we don't even notice.
         | 
         | Playing devil's advocate for your comment, one of the
         | (admittedly many) reasons going to the moon is harder than
         | charging a USB device is because there are not off-the-shelf
         | parts for space travel. If you had to build your USB charger
         | from scratch (including defining the USB specification for the
         | first time) each time you needed to charge your phone, I bet
         | people would quickly talk about USB cables as a "hard problem"
         | too.
         | 
         | That is the biggest takeaway we should get from articles like
         | this. Not that Apollo 11 wasn't a hugely impressive feat of
         | engineering. But that there is an enormous amount of
         | engineering in our every day lives that is mass produced and we
         | don't even notice.
        
           | argiopetech wrote:
           | This is actually about the wall warts the cable could be
           | plugged into.
           | 
           | Otherwise, I completely agree.
        
           | nzach wrote:
           | Your comment reminded me of this[0] video about the Jerry
           | Can.
           | 
           | A simple-looking object, but in reality a lot of thought
           | went into getting it to this form.
           | 
           | It also goes along the lines of "Simplicity is
           | complicated"[1].
           | 
           | [0] - https://www.youtube.com/watch?v=XwUkbGHFAhs
           | 
           | [1] - https://go.dev/talks/2015/simplicity-is-
           | complicated.slide#1
        
         | bryancoxwell wrote:
         | > The software (and hardware) of the Apollo missions was very
         | well-engineered.
         | 
         | I think this is the whole point of articles like this. I don't
         | think it's sensationalist at all to compare older tech with
         | newer and discuss how engineers did more with less.
        
       | cubefox wrote:
       | If anyone is interested, there is a 1965 documentary about an
       | Apollo computer:
       | 
       | https://youtube.com/watch?v=ndvmFlg1WmE
        
       | Havoc wrote:
       | It has certainly felt like the limit is software/PEBKAC for a
       | long while. Until lately with LLMs...that does make me feel "wish
       | I had a bigger hammer" again.
        
       | denton-scratch wrote:
       | > the LVDC actually contains triply-redundant logic
       | 
       | I didn't know that was just for the LVDC.
       | 
       | > emulate this voting scheme with 3x microcontrollers with a 4th
       | to tally votes will not make the system any more reliable
       | 
       | I think that's clear enough; the vote-tallier becomes a SPOF. I'm
       | not sure how Tandem and Stratus handled discrepancies between
       | their (twin) processors. Stratus used a pair of off-the-shelf
       | 68K processors, which doesn't lend itself to voting; I can't
       | see how
       | you'd resolve a disagreement between just two voters.
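       | 
       | A sketch of the pair case (hypothetical, just to state the
       | problem in code):
       | 
       |   def lockstep(a, b):
       |       # two channels can detect a fault but not arbitrate
       |       # it: with only two voters there is no majority
       |       return a if a == b else None  # None = fault detected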
       | 
       | I can't see how you make a voting-based "reliable" processor
       | from off-the-shelf CPU chips; I imagine it would require each
       | CPU to observe the outputs of the other two, and tell itself to
       | stop voting if it loses a ballot. Which sounds to me like
       | custom CPU hardware.
       | 
       | Any external hardware for comparing votes, telling a CPU to stop
       | voting, and routing the vote-winning output, amounts to a vote-
       | tallier, which is a SPOF. You could have three vote-talliers,
       | checking up on one another; but then you'd need a vote-tallier-
       | tallier. It's turtles from then on down.
       | 
       | In general, having multiple CPUs voting as a way of improving
       | reliability seems fraught, because it increases complexity, which
       | reduces reliability.
       | 
       | Maybe making reliable processors amounts to just making
       | processors that you can rely on.
        
         | ryukoposting wrote:
         | > I can't see how you'd resolve a disagreement between just two
         | voters.
         | 
         | Tell them both to run the calculation again, perhaps?
        
       | simne wrote:
       | > The CYPD4225 is definitely not rated for space.. if it would
       | work in space
       | 
       | It will. Not for long, and not very reliably, but it will.
       | 
       | Historically, space rockets were first created as "dual
       | purpose" (except maybe Vanguard, which was a totally civilian
       | program), so their electronics were designed with possible
       | radiation from nuclear war in mind; in space, as it happened,
       | they met natural radiation of a slightly different type
       | (spectrum). Currently, SpaceX just uses industrial-grade
       | computers on its rockets (not RAD-hardened).
       | 
       | Looking at the technical details, radiation creates two types
       | of problems for digital electronics:
       | 
       | 1. Random spikes (switches) from high-energy charged
       | particles. Unfortunately, only military RAD-grade parts have
       | integrated safety mechanisms; for civilian/industrial grades,
       | you could make a shield with an EM field and a thick layer of
       | protective material, like lead or even uranium. When a
       | thyristor effect (latch-up) happens, you need to power-cycle
       | (turn it off and on and reboot), and this is a source of risk
       | for the mission, but most probably it would withstand a
       | flight to the Moon.
       | 
       | 2. Aging of the semiconductor structure from a constant flow
       | of highly penetrating particles - basically just accelerated
       | diffusion, which destroys semiconductor structures. But in
       | the Earth-Moon environment, this is only an issue for
       | long-term operations (months or even years).
       | 
       | So, it will work.
        
       ___________________________________________________________________
       (page generated 2023-12-27 23:01 UTC)