Newsgroups: can.usrgroup
Path: utzoo!henry
From: henry@utzoo.uucp (Henry Spencer)
Subject: Re: C code (fwd)
Message-ID: <1990Mar1.170000.28002@utzoo.uucp>
Organization: U of Toronto Zoology
References: <9002280838.AA01415@cohort.uucp>
Date: Thu, 1 Mar 90 17:00:00 GMT

In article <9002280838.AA01415@cohort.uucp> Steve Bird <steve@cohort.UUCP> writes:
>         float a,b;
>         b = 2.0e20 + 1.0;
>         a = b - 2.0e20;
>         printf("%f \n",a);
>  When compiled the program returns the number 4008175468544.000000 .
>
>         a = 2.0e20;
>         printf("%f \n",b - a);
>  The program returns 0.000000 . Why ?

Single-precision floating-point, aka "float", typically has about 24 bits
of precision.  Numbers circa 2e20, represented in circa 24 bits, have
roundoff error circa 1e13, so 4e12 and 0 probably differ by 1 in the least-
significant bit.  The arithmetic in these two programs is not quite the
same, because all C floating-point arithmetic is done in "double", and
you've got an extra double->float->double conversion between subtraction
and printing in the first case.  Somehow, the code your compiler generates
ends up introducing different roundoff behavior in the two cases.  Asking
for a detailed explanation of a least-significant-bit difference is
pointless unless you're willing to examine compiler, generated code, and
hardware very closely; too many things can cause such differences.

"Floating point is an analog box anyway." -Hugh Redelmeier.  (Translation,
for the non-hardware types in the crowd:  "if there's only one or two bits
wrong at the least-significant end, you got what you paid for".)
-- 
MSDOS, abbrev:  Maybe SomeDay |     Henry Spencer at U of Toronto Zoology
an Operating System.          | uunet!attcan!utzoo!henry henry@zoo.toronto.edu
