[HN Gopher] Lies my calculator and computer told me (1987) [pdf]
       ___________________________________________________________________
        
       Lies my calculator and computer told me (1987) [pdf]
        
       Author : seanhunter
       Score  : 77 points
       Date   : 2023-09-19 11:19 UTC (11 hours ago)
        
 (HTM) web link (www.stewartcalculus.com)
 (TXT) w3m dump (www.stewartcalculus.com)
        
       | llimllib wrote:
        | Here's how you can use sympy (`pip install sympy`) or the decimal
        | module to get better numerical answers out of python:
        | 
        |     # as shown in the text, fp math yields an inexact result.
        |     # should be 3.310115e-5
        |     $ python -c "from math import *; print(8721*sqrt(3)-10_681*sqrt(2))"
        |     3.31011488015065e-05
        | 
        |     # https://docs.sympy.org/latest/modules/evalf.html
        |     $ python -c "from sympy import *; print(N(8721*sqrt(3)-10_681*sqrt(2)))"
        |     3.31011506606020e-5
        | 
        |     # the second argument to N is the desired precision
        |     $ python -c "from sympy import *; print(N(8721*sqrt(3)-10_681*sqrt(2), 50))"
        |     0.000033101150660602022280988927734905539317579147087675
        | 
        |     # or using the somewhat more awkward decimal module:
        |     $ python -c "from decimal import *; print(Decimal(8721)*Decimal(3).sqrt() - Decimal(10_681)*Decimal(2).sqrt())"
        |     0.00003310115066060202229
       | 
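        | A possible refinement of the decimal version (a small extension of
        | the example above, using the decimal module's documented context
        | precision setting):
        | 
        |     from decimal import Decimal, getcontext
        | 
        |     getcontext().prec = 50   # 50 significant digits for every operation
        |     print(Decimal(8721) * Decimal(3).sqrt() - Decimal(10_681) * Decimal(2).sqrt())
        |     # the leading digits should again match 3.3101150660602...e-5
        | 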
       | I'm not at all an expert at this, just wanted to play around with
       | some tools that have helped me with similar problems in the past.
       | 
       | I also don't know how accurate the digits following `3310115`
       | are! All I know is that these are ways to make the machine spit
       | out the correct number up to seven digits; I'd love it if
       | somebody would explain to me more of what I've done here tbh
        
         | aidenn0 wrote:
          | SymPy will numerically approximate the value of the expression
          | inside "N" to the specified number of decimal digits, so the
          | first 50 significant digits will be completely correct[1] when
          | you pass "50" as the second argument to N.
         | 
         | 1: I'm not sure how SymPy handles rounding, but it is typical
         | to round the last digit so e.g. printing just 5 digits of
         | 0.123456 would correctly be 0.12346
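          | 
          | A quick way to check that rounding behaviour (a hypothetical
          | session, not from the thread):
          | 
          |     from sympy import Rational, N
          | 
          |     x = Rational(123456, 1000000)   # exactly 0.123456
          |     print(N(x, 5))                  # expected to print 0.12346 -- last kept digit rounded up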
        
         | svilen_dobrev wrote:
          | further, Python has fractions (with exact rational arithmetic
          | over them), so
          | 
          | >>> Fraction("2/3") * Fraction("66/13")
          | Fraction(44, 13)
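          | 
          | A minimal runnable version of the same idea (the import is
          | needed; the second line just ties it back to the (2/3)*3-2
          | example elsewhere in this thread):
          | 
          |     from fractions import Fraction
          | 
          |     print(Fraction("2/3") * Fraction("66/13"))   # 44/13, exactly
          |     print(Fraction(2, 3) * 3 - 2)                # 0 -- no rounding anywhere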
        
         | Majromax wrote:
         | What you've done here is tell SymPy to use extra precision for
         | the intermediate (and final) output. This doesn't truly _fix_
         | the problem of cancellation and loss of precision, but for many
         | practical purposes it can postpone the problem long enough to
         | give you a useful result.
         | 
         | Internally, SymPy uses mpmath (https://mpmath.org/) for
         | representation of numbers to arbitrary precision. You could
         | install and use the latter library directly, gaining extra
         | precision without going through symbolic manipulation.
         | 
         | All that being said, it's still good practice to avoid loss of
         | precision at the outset. Arbitrary-precision calculations are
         | slow compared to hardware-native floating point operations.
          | Using the example from mpmath's homepage in IPython:
          | 
          |     In [1]: import mpmath as mp; import scipy as sp; import numpy as np
          | 
          |     In [2]: mp.mp.dps = 50  # set extended precision
          | 
          |     In [3]: %%timeit
          |        ...: mp.quad(lambda x: mp.exp(-x**2), [-mp.inf, mp.inf]) ** 2
          |     40.5 ms +- 4.74 ms per loop (mean +- std. dev. of 7 runs, 10 loops each)
          | 
          |     In [4]: mp.quad(lambda x: mp.exp(-x**2), [-mp.inf, mp.inf]) ** 2
          |     Out[4]: mpf('3.1415926535897932384626433832795028841971693993751015')
          | 
          |     In [5]: %%timeit
          |        ...: sp.integrate.quad(lambda x: np.exp(-x**2), -np.inf, np.inf)[0]**2
          |     570 us +- 78.9 us per loop (mean +- std. dev. of 7 runs, 1,000 loops each)
          | 
          |     In [6]: sp.integrate.quad(lambda x: np.exp(-x**2), -np.inf, np.inf)[0]**2
          |     Out[6]: 3.1415926535897927
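          | 
          | For instance, a minimal sketch of using mpmath directly on the
          | subtraction from the article (the 50-digit setting is an
          | arbitrary choice for illustration):
          | 
          |     import mpmath as mp
          | 
          |     mp.mp.dps = 50   # work to 50 significant decimal digits
          |     print(mp.mpf(8721) * mp.sqrt(3) - mp.mpf(10681) * mp.sqrt(2))
          |     # should agree with the 50-digit SymPy result quoted upthread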
        
       | masswerk wrote:
        | There was recently an interesting thread on the Retro Computing
        | Forum [1] related to this and curious behavior in 6502 flavors of
        | MS BASIC. In brief, the condition in
        | 
        |     IF SQR(3 * 2) = SQR(3) * SQR(2) THEN ...
        | 
        | evaluates to false, while
        | 
        |     IF SQR(3 * 2) - SQR(3) * SQR(2) = 0 THEN ...
        | 
        | evaluates to true. Similarly,
        | 
        |     A = SQR(3 * 2) - SQR(3) * SQR(2)
        | 
        | assigns zero (0.0) to variable A. So
        | 
        |     SQR(3 * 2) <> SQR(3) * SQR(2)
        | 
        | but
        | 
        |     SQR(3 * 2) - SQR(3) * SQR(2) = 0
        | 
        | which is kind of a nice paradox. (The same is true for a variety
        | of other value combinations.) We learn that the value of an
        | expression in a comparison is not the same as its result (probably
        | due to result cleanup, especially when the exponent happens to be
        | zero).
        | 
        | [1] https://retrocomputingforum.com/t/investigating-basic-
        | floati...
        
         | lmm wrote:
          | > We learn that the value of an expression in a comparison is
          | not the same as its result (probably due to result cleanup,
          | especially when the exponent happens to be zero).
         | 
         | That happens all the time on traditional x86 - (non-SSE)
         | floating point registers are 80 bits and then (usually) get
         | rounded to 64 bits when you store them in memory, so you will
         | very often have an expression that's nonzero if you use it
         | immediately, but zero if you put it in a variable and use it
         | later.
        
           | masswerk wrote:
            | In this case, there are two mechanisms at work: a conventional
            | rounding bit (for each of the floating point accumulators
            | available), and a special routine which zeros out all bytes of
            | the floating point accumulator when a result is transferred
            | and the exponent is zero. I personally suspect that it's the
            | latter which is not triggered appropriately, and which seems
            | to be the culprit here. Notably, this routine only applies to
            | the first of these accumulators. (It seems, though, that the
            | routine should be called when a value is copied from one FAC
            | to the other, which we would expect to happen here. So I'm
            | not sure.)
        
       | tiahura wrote:
       | See also, Intel(r) Decimal Floating-Point Math Library
       | https://www.intel.com/content/www/us/en/developer/articles/t....
        
       | Obscurity4340 wrote:
       | 5318008
        
       | basemi wrote:
        | On my linux box:
        | 
        |     $ echo "(2/3)*3-2" | bc -l
        |     -.00000000000000000002
        
         | teddyh wrote:
         | In the terminal I always use "dc", where, besides being RPN,
         | you have to explicitly set the precision. I.e. the result of
         | 2/3 (or "2 3 /") in dc is always "0" unless you set a precision
         | beforehand. This makes it clear that the result is arbitrary.
        
         | knodi123 wrote:
          | why not pipe the input through a few more commands? :-P
          | 
          |     bc -l -e "(2/3) * 3 - 2"
        
           | gibspaulding wrote:
           | Like this?
           | 
           | $ echo "(2/3)*3-2" | bc -l > /dev/null && echo "0"
        
           | 5e92cb50239222b wrote:
           | I don't know which system you're on, but GNU bc does not
           | support this flag.
           | 
            | On the other hand, bash and zsh support this:
            | 
            |     bc -l <<<'(2/3) * 3 - 2'
           | 
           | edit: neither does busybox.
        
           | seanhunter wrote:
           | First prize would be if you could write a pipe which
           | meaningfully combines bc, cc, dc, fc, nc and wc. No other
           | commands allowed.
        
         | irishsultan wrote:
          | bc and dc are arbitrary precision. By using -l you are
          | specifying that it should keep track of 20 digits after the
          | decimal point (plus you are importing some extra stuff).
          | 
          | You can try higher precision by setting the scale:
          | 
          |     $ echo "scale = 100; (2/3)*3 - 2" | bc
          |     -.000000000000000000000000000000000000000000000000000000000000000000\
          |     0000000000000000000000000000000002
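          | 
          | A sketch of the same truncation effect with Python's decimal
          | module (my own illustration; bc truncates division results at
          | the current scale, so the lost tail of 2/3 never comes back
          | when you multiply by 3):
          | 
          |     from decimal import Decimal, ROUND_DOWN
          | 
          |     # cut 2/3 off after 20 decimal places, like bc -l's scale=20
          |     two_thirds = (Decimal(2) / Decimal(3)).quantize(Decimal("1e-20"), ROUND_DOWN)
          |     print(two_thirds * 3 - 2)   # -2E-20, matching the bc -l answer upthread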
        
           | jepler wrote:
            | a neat calculator is spigot. It has no trouble getting exactly
            | 0 for this calculation:
            | 
            |     $ spigot '(2/3)*3-2'
            |     0
            | 
            | furthermore, when you specify a precision number, this sets
            | the precision of the final result, NOT the precision of
            | intermediaries (which is what bc & dc, and many other
            | arbitrary precision calculators do); every printed digit is
            | correctly rounded. [edited to add: truncated towards zero by
            | default; other rounding modes available as commandline flag]
            | 
            |     $ spigot -d72 'sin(1)-cos(1)'
            |     0.301168678939756789251565714187322395890252640180448838002654454610810009
            | 
            | There are still conditions where spigot can't give an answer,
            | such as
            | 
            |     $ spigot 'sin(asin(0.12345))'
            | 
            | .. spigot can never determine the next correctly rounded
            | digit after "0.1234" with certainty, for reasons elucidated
            | in the documentation.
            | 
            | spigot is in debian and here's the author's page with links
            | to a lot more background:
            | https://www.chiark.greenend.org.uk/~sgtatham/spigot/
        
             | zozbot234 wrote:
             | > spigot can never determine the next correctly rounded
             | digit after "0.1234" with certainty, for reasons elucidated
             | in the documentation.
             | 
             | Because spigot rounds towards zero in that mode, and it can
             | only bound the result as arbitrarily close to 0.12345 -
             | i.e., it can never decide between 0.12344999..99something
             | and 0.12345000..00something because the "something" part
             | that might break the tie never appears. This is a general
             | issue with exact real computation; in more complicated
             | cases, it can even be undecidable whether some expression
             | is exactly zero.
        
         | jeroenhd wrote:
         | Stuff like this is why I use Python for basic math
         | 
          | On my phone:
          | 
          |     ~ $ python -c "print( (2 / 3) * 3 - 2 )"
          |     0.0
          |     ~ $ python --version
          |     Python 3.11.5
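          | 
          | (Worth noting: CPython floats are still ordinary IEEE 754
          | doubles; this particular expression just happens to round back
          | to exactly 2.0. A quick counterexample on the same interpreter:
          | 
          |     ~ $ python -c "print(0.1 + 0.2)"
          |     0.30000000000000004
          | 
          | so the usual float caveats still apply.)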
        
       | jeroenhd wrote:
       | Testing these problems in
       | https://www.cemetech.net/projects/jstified/ seems to indicate
       | that "modern" (as in, late 90s) calculators have these problems
       | well under control.
       | 
        | The built-in Android calculators on my phone and tablet (take
        | note, Apple) both seem to get the answer right as well.
       | 
       | Fun problems for people trying to build their own calculators,
       | but not really a problem if you're actually trying to solve
       | equations using existing ones.
        
         | graywh wrote:
         | and yet today I saw a website calculate 3.3 x 4 to be
         | 13.200000000000003
        
         | TheTon wrote:
         | Apple's calculator seems to get the first set of problems from
         | the pdf right as well. Did you test with it and get something
         | different?
        
           | kccqzy wrote:
           | Consider this example from the article where it asks you to
           | calculate the limit of ln(1+h)/h as h tends to zero. Set h to
           | be 10^(-17). The Android calculator reliably succeeds and
           | gives the same answer as Mathematica. The iOS calculator
           | reliably fails; at least it knows its limits and refuses to
           | give even an approximate answer.
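            | 
            | For comparison, a quick sketch of why plain double precision
            | trips over this case (math.log1p is the usual workaround;
            | this is my own illustration, not how either calculator works
            | internally):
            | 
            |     import math
            | 
            |     h = 1e-17
            |     print(math.log(1 + h) / h)   # 0.0 -- 1 + 1e-17 rounds to exactly 1.0
            |     print(math.log1p(h) / h)     # 1.0 -- log1p(h) never forms 1 + h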
        
         | kccqzy wrote:
         | The Android calculator is indeed super amazing. There's a super
         | interesting paper behind the tech (constructive reals):
         | https://cacm.acm.org/magazines/2017/8/219594-small-data-comp...
         | written by Boehm (the same person behind the eponymous garbage
         | collector).
        
       | readyplayernull wrote:
       | > You see, when formulas create a circular reference, Excel will
       | run that computation up to a number of times. If, in those
       | computations, the magnitude of the difference between the most
       | recent and previous computed values for the cell falls below some
       | pre-defined epsilon value (usually a very small number, like
       | 0.00001), Excel will stop recomputing the cell and pretend like
       | it finished successfully.
       | 
       | https://basta.substack.com/p/no-sacred-masterpieces
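        | 
        | A toy sketch of the stopping rule described there (my own
        | illustration of "iterate until the change falls below epsilon",
        | not Excel's actual code):
        | 
        |     import math
        | 
        |     def iterate(f, x0, max_iters=100, eps=1e-5):
        |         x = x0
        |         for _ in range(max_iters):
        |             new = f(x)
        |             if abs(new - x) < eps:
        |                 return new      # change fell below epsilon: report "converged"
        |             x = new
        |         return x                # hit the iteration cap without converging
        | 
        |     # e.g. the circular definition x = cos(x) settles near 0.739
        |     print(iterate(math.cos, 1.0))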
        
         | dan-robertson wrote:
         | Don't you need to go into the settings to turn on circular
         | references (ie iterative calculation) and then you can choose
         | the limits/tolerance?
        
       | pif wrote:
       | Not reading
       | "https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.h..."
       | considered harmful.
        
       | nickcw wrote:
       | I tried (2 / 3) * 3 - 2 on my Casio FX-180P [1] (now almost 40
       | years old!) and on Android Realcalc [2] (which mimics a very
       | similar calculator) and I got 0 on both.
       | 
        | Same with sqrt(6)^2 - 6 = 0 on both. (-1)^5 works fine too.
       | 
       | So full marks for the ancient Casio and its emulator on the
       | initial quality tests!
       | 
       | [1] https://www.calculator.org/calculators/Casio_fx-180P.html
       | 
       | [2]
       | https://play.google.com/store/apps/details?id=uk.co.nickfine...
       | 
       | Note that [1] claims the calculator came out in 1985 but I'm sure
       | I got mine in 1983 or 1984
        
       | cratermoon wrote:
       | My favorite site for FP math fun:
       | https://0.30000000000000004.com/
       | 
       | Also fun, Muller's recurrence:
       | https://scipython.com/blog/mullers-recurrence/
        
       | dang wrote:
        | Looks like _Calculus: Early Transcendentals_ was first published
        | in 1987, so I've put that year above until proven otherwise.
       | 
       | https://www.google.com/search?tbo=p&tbm=bks&q=%22To+have+a+f...
        
       | quickthrower2 wrote:
       | Floating point values are not numbers. They are ranges. They are
       | approximately a field like entity-ish.
        
         | orlp wrote:
          | IEEE 754 floating point values _are_ numbers; they are not
          | ranges. Every operation rounds to a representable number in the
          | set.
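          | 
          | For example (a quick illustration in Python; Decimal just makes
          | the stored value visible):
          | 
          |     from decimal import Decimal
          | 
          |     # the literal 0.1 is stored as one exact (dyadic rational) number:
          |     print(Decimal(0.1))
          |     # 0.1000000000000000055511151231257827021181583404541015625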
        
         | BeetleB wrote:
         | Everyone is correcting you, but no one is pointing out the real
         | culprit. Floating point operations are not the same as math
         | operations (i.e. +, -, *, / behave differently).
        
           | ska wrote:
           | I think what you mean here is that floating point numbers and
           | the usual operators (+,*) and inverses (-,/) do not work the
           | same as Reals (i.e. they don't form a field), but people
           | often behave as if they do.
           | 
           | This is a common confusion.
        
           | gilcot wrote:
            | IEEE 754 is well defined, so $a+b=b+a$. But it can't guarantee
            | that $(a+b)+c=a+(b+c)$ (the well-known error propagation
            | issue).
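            | 
            | A concrete instance (plain Python floats, i.e. IEEE 754
            | doubles):
            | 
            |     a, b, c = 1.0, 1e100, -1e100
            |     print((a + b) + c)   # 0.0 -- the 1.0 is absorbed into 1e100 and lost
            |     print(a + (b + c))   # 1.0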
        
             | BeetleB wrote:
             | IEEE guarantees commutativity, but does not guarantee the
             | _correctness_ of a+b.
             | 
             | My point is that floating point numbers are real numbers,
             | but the usual operations on them will not give you the same
             | results as the operations on real numbers.
        
         | Majromax wrote:
         | Floating point values are binary scientific notation with a
         | fixed precision. All of the unintuitive behaviour of floats is
         | repeated if you imagine conducting decimal calculations
         | yourself in said scientific notation, with intermediate
         | rounding at every step.
         | 
         | Floats are only ranges in that each real number (within the
         | overall range limit) has a unique floating point representation
         | to which it is closest, up to rounding at the precise interval
         | boundary.
         | 
         | Floats can be _extended_ to interval arithmetic, but doing so
         | requires the extra work of keeping track of interval limits.
         | When calculations lose precision (through cancellation of
         | digits in the floating point representation), the interval of
         | uncertainty is larger than the floating point precision.
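          | 
          | The decimal analogy is easy to play with directly, e.g. using
          | Python's decimal module with a deliberately tiny precision (my
          | own illustration):
          | 
          |     from decimal import Decimal, getcontext
          | 
          |     getcontext().prec = 4            # keep only 4 significant digits per step
          |     third = Decimal(1) / Decimal(3)  # 0.3333
          |     print(third * 3)                 # 0.9999, not 1 -- the same kind of surprise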
        
         | repiret wrote:
         | In what way do they behave like ranges?
         | 
         | I generally prefer to think of them as they are: able to
         | represent a subset of the rationals, plus a couple other weird
         | values, but not a field, and importantly, not obeying any of
            | the usual algebraic laws of arithmetic.
        
           | lmm wrote:
           | Yep. Floating point addition is not even associative, so
           | they're not only not a field, but not a ring, not a group,
           | nor even a monoid.
        
         | voluntaryeuth wrote:
         | Can you point me to a reference?
         | 
         | Every floating point article (including IEEE 754) I've seen
         | treats normal floating point numbers as dyadic rationals.
        
         | tgv wrote:
         | I don't think you can see them as a range. Sure, you can think
         | of the float representation of 0.1 and 0.2 as a small range
         | around those values, but what does that make 0.1 + 0.2, or 0.1
         | * 0.2, or 0.1 ** 0.2? Now the resulting float no longer
         | represents the range of potential conversion error.
         | 
         | And at the same time, integer values (up until some decently
         | high numbers) are represented correctly, so they would need a
         | different range.
        
       ___________________________________________________________________
       (page generated 2023-09-19 23:00 UTC)