[HN Gopher] Sierra's Macintosh Timebomb (2021)
___________________________________________________________________
Sierra's Macintosh Timebomb (2021)
Author : zdw
Score : 175 points
Date : 2023-01-12 00:22 UTC (1 day ago)
(HTM) web link (www.benshoof.org)
(TXT) w3m dump (www.benshoof.org)
| lostlogin wrote:
| It took me way too long to realise that this referred to the
| company, not the OS version. I even played those games.
|
| Coffee time.
| nebulous1 wrote:
| I remember Sierra games frequently developing timing issues as
| PCs got faster. They never seemed to get that one down.
| chrisdfrey wrote:
| A lot of their game logic is timed in "game cycles" instead
| of something proper like counting seconds. ScummVM runs a lot
| of old Sierra games; it applies some patches to game scripts to
| fix these issues and also throttles some stuff to make it run
| at a proper speed on modern hardware.
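The kind of throttling described above can be sketched as follows (an illustrative assumption, not ScummVM's actual code): run a fixed number of interpreter "game cycles" per wall-clock second instead of as many as the CPU can manage.

```python
import time

CYCLES_PER_SECOND = 60  # assumed rate, for illustration only

def run_throttled(game_cycle, total_cycles):
    """Run game_cycle at a fixed wall-clock rate regardless of CPU speed."""
    period = 1.0 / CYCLES_PER_SECOND
    deadline = time.perf_counter()
    for _ in range(total_cycles):
        game_cycle()                          # one tick of game logic
        deadline += period
        remaining = deadline - time.perf_counter()
        if remaining > 0:                     # fast CPU: wait out the rest
            time.sleep(remaining)
```

On a machine fast enough to finish each cycle early, the sleep absorbs the slack; on a slow machine the loop simply runs flat out.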
| guessbest wrote:
| Computer software was incapable of the complexity of modern
| languages and practices because Stack Overflow didn't exist.
| Until Windows came along with a time API based on datetime,
| most games just used CPU clock cycles, like you identified.
| CPU speed is the ultimate time API.
| acomjean wrote:
| The only use of the "turbo" button on some PCs in the 90s I
| found was to slow the machine down for the few older games
| that didn't have code to deal with the various speeds.
| tpmx wrote:
| Related but completely different: All MS-DOS programs built with
| Turbo/Borland Pascal stopped working on ~200+ MHz Pentium CPUs
| because of a sleep calibration loop that ended up going too fast
| and causing a divide by zero in the runtime library:
|
| http://www.pcmicro.com/elebbs/faq/rte200.html
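A simplified model of that calibration failure (constants are approximate, and this is Python standing in for the Pascal RTL): the CRT unit's Delay calibration counts busy-loop iterations during one ~55 ms timer tick, then divides to get loops per millisecond. The x86 DIV instruction used there produces a 16-bit quotient, and a quotient that doesn't fit raises the same fault the RTL reports as runtime error 200 ("Division by zero").

```python
def calibrate_delay(loops_per_second):
    """Model the Delay calibration; raise like RTE 200 on fast CPUs."""
    count = loops_per_second * 55 // 1000    # loops in one 55 ms tick
    quotient = count // 55                   # loops per millisecond
    if quotient > 0xFFFF:                    # 16-bit quotient overflow
        raise ZeroDivisionError("runtime error 200")
    return quotient

calibrate_delay(10_000_000)      # a slow CPU calibrates fine
# calibrate_delay(200_000_000)   # a fast enough CPU raises RTE 200
```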
| nottorp wrote:
| Ohh I was there on my first programming job, still a student :)
| I think we recompiled the RTL and then distributed new binaries
| with the working RTL.
| TonyTrapp wrote:
| To this day I find it fascinating that the runtime error
| number (200) so perfectly coincides with the approximate CPU
| speed in MHz at which the error starts to happen.
| the_af wrote:
| One early videogame that got this right was Alley Cat... in
| '83!
|
| The x86 version, coded in assembly language, works at the right
| speed regardless of CPU speed. MobyGames has this to say [1]:
|
| > "Alley Cat was one of the few games of the 1980s that was
| programmed with full attention to different PC speeds. It's an
| early, old game--yet it runs perfectly on any machine. The
| reason it runs on any computer today is, upon loading, the
| first thing it performs is a mathematical routine to determine
| the speed of your processor, and as of 2003 we've yet to build
| an Intel PC too fast to play it."
|
| I had a lot of fun with this game back in the day, when I got
| my first PC, an XT clone (with Hercules graphics card and
| "amber" monochrome CRT!).
|
| ----
|
| [1] https://www.mobygames.com/game/alley-cat
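The approach the MobyGames quote describes can be sketched like this (the routines below are invented for illustration, not Alley Cat's actual code): benchmark the CPU once at startup, then derive all game delays from the measured rate so the wall-clock speed stays constant on any machine.

```python
import time

def measure_loops_per_second(iterations=200_000):
    """Time a fixed amount of busy work once, at startup."""
    start = time.perf_counter()
    x = 0
    for i in range(iterations):
        x += i                         # fixed busy work
    elapsed = time.perf_counter() - start
    return iterations / elapsed

def loops_for_delay(ms, loops_per_second):
    """Busy-loop iterations approximating a delay of `ms` on this machine."""
    return int(loops_per_second * ms / 1000)
```

A faster CPU measures a higher rate and so spins through proportionally more iterations per delay, cancelling the speed difference out.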
| tpmx wrote:
| I had a lot of fun with Alley Cat, Sopwith Camel, digger
| (https://digger.org/), GW-Basic and Turbo Pascal 3.0 or so
| when my computer finally got upgraded from a ZX81 (I got it
| in 1984 as a handmedown from a relative) to an Amstrad PC1512
| in 1988 (thanks, supportive parents).
|
| The British truly ruled the low-end computer market in Europe
| back then.
| the_af wrote:
| Lots of love for GW-BASIC here! It was my first "true"
| programming language. I started with C64 BASIC, but it was
| too limited to do anything actually interesting, at least
| without resorting to the cheat of PEEK and POKE.
| TazeTSchnitzel wrote:
| I read somewhere that Windows 95 has _several_ such loops that
| have to be patched to get it to run on modern CPUs.
| benjaminpv wrote:
| Yep, you still see it come up nowadays with people running it
| under virtualization.
|
| https://www.os2museum.com/wp/those-win9x-crashes-on-fast-mac...
| cf100clunk wrote:
| Another 68000 system time related bug but completely different
| problem (former Apollo/Domain sysadmin here, which I became at
| one company because I was already doing their Unix sysadmin
| job. We had to periodically reset the system clocks on those):
|
| ''The bug apparently results from the high 32 bits of the clock
| data type being declared as a 31 bit value instead of 32 bit in
| the pascal include files. The reason for this is lost to
| history, but early pascal compilers may have had problems with
| 32 bit unsigned numbers, possibly because the early Motorola
| 68000 processor chips didn't have 32 bit unsigned multiply
| operations.''
|
| https://jim.rees.org/apollo-archive/date-bug
| mikepavone wrote:
| > possibly because the early Motorola 68000 processor chips
| didn't have 32 bit unsigned multiply operations.
|
| This doesn't really make sense as an explanation. For both
| signed and unsigned multiply, the 68000 has a 16x16 -> 32
| multiply. This is indeed kind of inconvenient if you need to
| multiply 32-bit numbers, but 31-bit numbers are not any
| easier. If anything, the unsigned case is easier to reason
| about than the signed one.
| tpmx wrote:
| Do I understand this correctly - it was a 31-bit version of
| the 32-bit Y2K38 problem?
| xxpor wrote:
| The year 2038 problem _is_ a 31 bit problem: time_t is
| signed (in POSIX).
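Since time_t is signed, only 31 bits count seconds forward from the epoch, which pins down the exact rollover moment:

```python
from datetime import datetime, timezone

# A signed 32-bit time_t runs out at 2**31 - 1 seconds past
# the Unix epoch of 1970-01-01 00:00:00 UTC.
rollover = datetime.fromtimestamp(2**31 - 1, tz=timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00
```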
| tpmx wrote:
| So, was it a 30-bit problem?
| KMag wrote:
| It looks like maybe Apollo/Domain systems use an epoch of
| 1929-10-14 instead of 1970-01-01. Maybe that's the
| birthday of one of the early developers (or their
| spouse).
|
| There's nothing particularly magical about the Unix
| epoch.
| dylan604 wrote:
| I was too young to know/understand the hows and whys, but I do
| remember having a couple of programs that you had to disable
| the turbo button on the front of the computer for them to run.
| I always thought it silly to have a button to intentionally
| slow down the computer, yet it absolutely was required.
| mypalmike wrote:
| "according to numberwang"
|
| Gotta love a subtle Mitchell and Webb reference.
| [deleted]
| pedrow wrote:
| Also Blade Runner reference
| BoardsOfCanada wrote:
| So wait, you can just skip the division and look at when Time
| changes since that already is in seconds?
| smm11 wrote:
| I can't get Aldus Pagemaker to run on my Watch either.
| recursive wrote:
| What have you tried?
| ajross wrote:
| The "real" bug here is Motorola's. Having instructions that fail
| silently (vs. trapping, as DIVU actually does if the divisor is
| zero!) is just outrageous.
|
| For clarity, because the article takes too long to get there:
| DIVU has a 32 bit dividend, but a 16 bit divisor and a 16 bit
| result. So if you try to divide e.g. 0x20000 by 2, the result
| should be 0x10000, which doesn't fit in the output register.
| So... the CPU sets the overflow flag but otherwise does nothing!
|
| I'm not quite old enough to have been writing assembly for the
| 68k, but I've heard of this issue before. This was surely not
| the only DIVU footgun to ship in that era.
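The DIVU behaviour described above can be modelled directly (in Python, not actual 68000 code): a 32-bit dividend and 16-bit divisor produce a result packed as remainder in the high word and quotient in the low word; a zero divisor traps, while a quotient too big for 16 bits merely sets the overflow flag and leaves the destination register untouched.

```python
def divu(dividend, divisor):
    """Model of 68000 DIVU: returns (packed result or None, overflow flag)."""
    if divisor == 0:
        raise ZeroDivisionError("DIVU traps on a zero divisor")
    quotient, remainder = divmod(dividend, divisor)
    if quotient > 0xFFFF:
        return None, True                    # register unchanged, V flag set
    return (remainder << 16) | quotient, False

divu(0x20000, 2)   # quotient would be 0x10000: no result, overflow set
divu(100, 7)       # fits: remainder 2 in the high word, quotient 14 low
```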
| layer8 wrote:
| The rationale probably is that you can easily check for
| division by zero before doing the actual division, whereas
| checking for an overflowing division requires actually
| performing the division. The DIVU instruction thus allows
| checking for overflow without the overhead of raising an
| exception.
|
| It certainly is a bit of a footgun, because one normally
| doesn't expect overflow on integer division. On the other hand,
| the other basic arithmetic operations all require overflow
| checks as well, or equivalent operand value analysis. And this
| is assembly we're talking about, where you're supposed to know
| what you're doing.
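There is in fact a way (my own sketch, not from the thread) to rule out DIVU overflow without dividing at all: the 16-bit quotient fits exactly when dividend < divisor * 2**16, i.e. when the high word of the 32-bit dividend is smaller than the divisor.

```python
def divu_would_overflow(dividend, divisor):
    """True iff a 68000 DIVU of dividend by divisor would overflow."""
    return (dividend >> 16) >= divisor

divu_would_overflow(0x20000, 2)   # True: quotient 0x10000 needs 17 bits
divu_would_overflow(0x1FFFF, 2)   # False: quotient 0xFFFF just fits
```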
| jdwithit wrote:
| That may be a better choice, but it wouldn't have prevented the
| bug from the article, would it? Because at the time of release,
| the date didn't overflow. And after their goofy "subtract 5,000
| days" fix, it didn't overflow either. The only change would be
| the user experiencing a crash vs a hang.
| pm215 wrote:
| I would disagree. At the assembly level, as long as the
| instruction documents what it does in the oddball cases, it's
| up to you the programmer to use it correctly. Some CISC CPUs
| trap on errors like divide-by-zero; RISC instruction sets
| usually make the choice to define some specific behaviour. (For
| example on Arm, division by zero will always return a 0 result,
| and will not trap.) Taking an exception is an expensive
| business, and if the programmer wasn't expecting it then the
| result will be no better (program crashes, instead of program
| hangs).
| kevin_thibedeau wrote:
| > on Arm, division by zero will always return a 0 result, and
| will not trap.
|
| Cortex-M and -R have support for div-by-0 faults but it's
| disabled by default.
| error503 wrote:
| It's not really failing silently, it's telling you through the
| overflow flag. To me this seems logically consistent with how
| one would expect overflow to behave, as it does with other
| instructions like ADD.
|
| That said, I think this instruction would be safer and more
| useful if it still set at least the remainder result bits
| (which should always be valid). Then this case would not
| require checking, nor would some other common cases like
| 'execute every odd iteration' kind of code.
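The remainder observation checks out with quick arithmetic (illustrative Python, not 68000 code): a remainder is always strictly smaller than the divisor, so with a 16-bit divisor it always fits in 16 bits, even when the quotient overflows.

```python
dividend, divisor = 0x30001, 2
quotient = dividend // divisor     # 0x18000: overflows a 16-bit register
remainder = dividend % divisor     # 1: always < divisor, so always fits
assert quotient > 0xFFFF and remainder <= 0xFFFF
```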
| manv1 wrote:
| If it's documented it's not a bug, it's behavior.
|
| The developer should check for overflow before using the value.
| If this assembly is compiler generated then it's probably
| classifiable as a compiler bug.
| TillE wrote:
| > Future games with newer interpreters shipped with this same
| code instead of simply checking for overflow.
|
| That's the real punchline here, but well, game development has
| always been a mess. Automated testing of any kind is still rare,
| as far as I know.
| [deleted]
| Someone wrote:
| It's unlikely automated testing would have caught this first
| time round.
|
| The bug fix probably should have fixed this 'for good', but
| that shouldn't require a test, either.
___________________________________________________________________
(page generated 2023-01-13 23:00 UTC)