Newsgroups: comp.arch
Path: utzoo!henry
From: henry@zoo.toronto.edu (Henry Spencer)
Subject: Re: IEEE arithmetic (Goldberg paper)
Message-ID: <1991Jun1.224243.13260@zoo.toronto.edu>
Date: Sat, 1 Jun 1991 22:42:43 GMT
References: <9106010224.AA28532@ucbvax.Berkeley.EDU>
Organization: U of Toronto Zoology

In article <9106010224.AA28532@ucbvax.Berkeley.EDU> jbs@WATSON.IBM.COM writes:
>>My understanding is that the funnier-looking features, in particular the
>>infinities, NaNs, and signed zeros, mostly cost essentially nothing.
>
>          I believe this statement is incorrect.  Infs and nans compli-
>cate the design of the floating point unit...

I've had some mail from people who've designed FPUs.  They say that the infs
make no difference to speak of, nor do the quiet NaNs.  They do say that the
signalling NaNs are a pain -- as is anything that can cause an exception --
on a pipelined machine with tight constraints on exception behavior.
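To make the quiet-vs-signalling distinction concrete: a quiet NaN just flows through arithmetic without trapping, which is why it costs a pipeline essentially nothing. A quick sketch (modern Python, but doubles here follow the same IEEE semantics):

```python
import math

# A quiet NaN propagates silently through arithmetic -- no trap,
# no exception, nothing for a pipeline to unwind.
qnan = float('nan')
r = qnan * 2.0 + 1.0

print(math.isnan(r))   # True: the NaN flowed straight through
print(qnan == qnan)    # False: NaN compares unequal to everything, itself included
```

A signalling NaN, by contrast, is defined to raise the invalid-operation exception when touched, and it is that trap, not the bit pattern, that complicates a pipelined FPU.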

>          I guess I was a bit obscure here.  On IBM hex systems when
>floating overflow occurs the default behavior is for an error message
>to be printed which includes a traceback pointing to the offending
>instruction.  Execution then continues (with the max floating number as
>a fixup) but will terminate if more than a small number (5?) of over-
>flows occur.  I was under the impression that this was typical behavior
>for pre-IEEE systems.  Is that not the case?

Not the ones I've used.  You could tell them to do this, but then you
can tell an IEEE system to do it too.

>          It seems to me that the existence of infs in IEEE is in-
>creasing the likelihood of obtaining plausible-looking garbage.

I'm not sure how.  Certainly forcing a result to inf is much better than
forcing it to some random number (e.g. the largest representable one),
because infs propagate through successive calculations, and generally don't
get turned back into plausible-looking garbage.
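A small illustration of the difference (Python doubles standing in; the first behavior is the IEEE default, the second mimics the clamp-to-max fixup):

```python
import math
import sys

big = 1e308
ieee = big * 10.0       # overflows to +inf under IEEE rules
later = ieee - big      # inf - finite is still inf: the overflow stays visible

# The pre-IEEE fixup style: substitute the largest finite number instead.
clamped = sys.float_info.max
garbage = clamped - big  # finite, positive, and entirely plausible-looking

print(later)    # inf
print(garbage)  # a finite number with no hint that anything went wrong
```

Downstream code that receives `later` can't mistake it for a real result; code that receives `garbage` can.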

>>To me it sounded like "the precise number of bits was not thought to be
>>crucial, so the limitations of a very important existing implementation
>>were taken into consideration"...
>          I was unaware the Intel 8087 could be considered a "very
>important existing implementation" as compared to the DEC format for
>example.  Do you have some evidence for this?

Its implementation was well underway at the time and it plausibly was going
to sell far more FPUs than any DEC design.  Besides, you're missing the point
slightly:  *in the context of IEEE arithmetic*, the 8087 was an important
implementation.  The DEC implementations were irrelevant to the question of
how long IEEE extended formats should be, since they were not IEEE at all,
the campaign to base IEEE on DEC FP having failed (according to Goldberg,
mostly because DEC format had no room for things like infs and NaNs).

>          79 bits is clearly an absurd length for a floating point
>format (the same is true of the 43 bit single extended format)...

They are minima, not precise requirements, as witness the people who
implement single extended as 64-bit double so they can claim conformance
with the standard's very strong recommendation that there be an extended
format for every "normal" format.

>... The rationale behind the extended formats:
>providing extra precision for intermediate terms is ridiculous.  If the
>added precision is available it should always be used otherwise it is
>of little value...

Having had some need to compute hypotenuses without overflow in code that
otherwise wanted to use single precision for speed, I'm afraid I disagree.
Whether it is "of little value" depends a lot on what you are doing.
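The hypotenuse case, sketched (Python doubles stand in for the single-vs-extended situation; `math.hypot` plays the role of an intermediate computed with extra range):

```python
import math

# The textbook formula overflows when x*x exceeds the format's range,
# even though the true hypotenuse is comfortably representable.
x = y = 1e200                       # well within double range
naive = math.sqrt(x * x + y * y)    # x*x overflows to inf, so the answer is inf
safe = math.hypot(x, y)             # computed carefully; no intermediate overflow

print(naive)   # inf
print(safe)    # ~1.414e200, the right answer
```

The wider intermediate isn't needed for every operation to be useful; it's needed exactly where the naive intermediate falls off the end of the narrow format.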
-- 
"We're thinking about upgrading from    | Henry Spencer @ U of Toronto Zoology
SunOS 4.1.1 to SunOS 3.5."              |  henry@zoo.toronto.edu  utzoo!henry
