Newsgroups: comp.arch
Path: utzoo!henry
From: henry@utzoo.uucp (Henry Spencer)
Subject: Re: Round-off
Message-ID: <1988Jan22.170725.10453@utzoo.uucp>
Organization: U of Toronto Zoology
References: <189@mithras> <614@PT.CS.CMU.EDU> <4404@ecsvax.UUCP> <1069@cpocd2.UUCP>
Date: Fri, 22-Jan-88 17:07:24 EST

> It was precisely the awfulness of IBM single precision that led Kernighan and
> Ritchie to make it a required feature of the C language that all floating
> point computations be done in double precision.

Sorry, not correct.  Remember that C was originally a system implementation
language for the pdp11, not the IBM mainframes.  Dennis Ritchie has talked
about this very issue in the past.  The biggest reason for the rules about
floating-point arithmetic was that the floating-point box in the pdp11 does
not have separate 32-bit and 64-bit instructions.  Instead it has a mode
bit to select floating-point width.  This was a major headache for code
generation, so Dennis cheated by just setting the bit once (the first
instruction of all pdp11 Unix programs is SETD) and doing everything in
double precision.  Contributing reasons were the difficulty of making sure
that function arguments were the right length otherwise, and, yes, the
greater accuracy.  He undoubtedly knew about the IBM rounding problem, and
it may have had some influence, but saying that it was *the* reason is
not right.
-- 
Those who do not understand Unix are |  Henry Spencer @ U of Toronto Zoology
condemned to reinvent it, poorly.    | {allegra,ihnp4,decvax,utai}!utzoo!henry
