[HN Gopher] 0.1 and 0.2 Returns 0.30000000000000004 (2018)
       ___________________________________________________________________
        
       0.1 and 0.2 Returns 0.30000000000000004 (2018)
        
       Author : tingabing
       Score  : 43 points
       Date   : 2021-04-11 19:12 UTC (3 hours ago)
        
 (HTM) web link (qntm.org)
 (TXT) w3m dump (qntm.org)
        
       | dang wrote:
       | Related past threads (not about this article):
       | 
       |  _0.30000000000000004_ -
       | https://news.ycombinator.com/item?id=21686264 - Dec 2019 (402
       | comments)
       | 
       |  _0.30000000000000004_ -
       | https://news.ycombinator.com/item?id=14018450 - April 2017 (130
       | comments)
       | 
       |  _0.30000000000000004_ -
       | https://news.ycombinator.com/item?id=10558871 - Nov 2015 (240
       | comments)
       | 
       |  _0.30000000000000004_ -
       | https://news.ycombinator.com/item?id=1846926 - Oct 2010 (128
       | comments)
       | 
       | Resisting temptation to list floating-point math threads because
       | there are so many:
       | 
       | https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
        
       | kungito wrote:
       | HN is more and more like first semester coding class where the
       | professor always tells the "fun facts" but we have to be in the
       | same class every year
        
         | tediousdemise wrote:
         | Reminds me of long-standing problems in mathematics. The
         | problems will forever be amusing until some dark horse comes
         | out of nowhere with a formal proof/solution that stuns the
         | academic community.
        
         | BeetleB wrote:
         | Eternal September?
        
           | messe wrote:
           | If only the mind was spotless.
        
         | enriquto wrote:
         | just let the lucky thousand of today have their fun!
        
       | crazygringo wrote:
        | Indeed, and therefore:
        | 
        |     0.1 + 0.2 != 0.3
       | 
       | You can check it in the JavaScript console.
       | 
        | This actually makes me wonder if anyone's ever attempted a
        | floating-point representation that _builds in an error range_,
        | and correctly propagates/amplifies error over operations.
       | 
       | E.g. a simple operation like "1 / 10" (to generate 0.1) would be
       | stored not as a single floating-point value, but really as the
       | _range_ between the closest representation greater than and less
       | than it. The same with  "2 / 10", and then when asking if 0.1 +
       | 0.2 == 0.3, it would find an _overlap in ranges_ between the
       | left-hand and right-hand sides and return _true_. Every floating-
       | point operation would then take and return these ranges.
       | 
       | Then floating point arithmetic could be used to actually reliably
       | test equality without ever generating false negatives. And if you
        | examined the result of a calculation of 10,000 operations, you'd
        | also be able to get a sense of how far off it might maximally be.
       | 
        | I've searched online and can't find anything like it, though maybe
       | I'm missing an important keyword.
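        | 
        | (For illustration, a minimal sketch of the idea in Python: a toy
        | interval type, not a real library. It assumes Python 3.9+ for
        | math.nextafter and only widens bounds by one representable value
        | per operation.)
        | 
        |     import math
        | 
        |     def widen(lo, hi):
        |         # Step one representable double outward on each side.
        |         return (math.nextafter(lo, -math.inf),
        |                 math.nextafter(hi, math.inf))
        | 
        |     def interval_add(a, b):
        |         # a and b are (lo, hi) pairs; widen the sum's bounds to
        |         # cover the rounding of the addition itself.
        |         return widen(a[0] + b[0], a[1] + b[1])
        | 
        |     def could_equal(a, b):
        |         # "Equal" in the proposed sense: the two ranges overlap.
        |         return a[0] <= b[1] and b[0] <= a[1]
        | 
        |     x, y, z = (0.1, 0.1), (0.2, 0.2), (0.3, 0.3)
        |     print(could_equal(interval_add(x, y), z))   # True
        |     print(0.1 + 0.2 == 0.3)                     # False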
        
         | naniwaduni wrote:
         | > Then floating point arithmetic could be used to actually
         | reliably test equality without ever generating false negatives.
         | 
         | The flip side is that you generate plenty of false _positives_
         | once your error ranges get large enough. This happens pretty
         | readily if you e.g. perform iterations that are supposed to
         | keep the numbers at roughly the same scale.
        
         | AnimalMuppet wrote:
         | But what you would actually get is something like this:
          | 
          |     x---0.1 + 0.2 ---x
          |        x---0.3---x
         | 
         | That is, the range of 0.1 + 0.2 would be wider than the range
         | of 0.3. And now what do you do? There is overlap, so are they
         | equal? But there are parts that don't overlap, so are they
         | different?
        
           | crazygringo wrote:
           | Well right now you basically can't ever check for equality
           | with floating-point arithmetic and trust that two numbers
           | that should intuitively be equal are reported as equal.
           | 
            | For me, floating-point equality _would be_ if there are any
            | parts that overlap. Basically "=" would mean "_to the
            | extent_ of the floating-point accuracy of this system, these
            | values could be equal".
           | 
           | If you're doing a reasonably limited number of operations
           | with values reasonably larger than the error range, then it
           | would meet a lot of purposes -- you can add 0.5 somewhere in
           | your code, subtract 0.5 elsewhere, and still rely on the
           | value being equal to the original.
        
           | Nullabillity wrote:
           | Make equality checks illegal, and instead define specific
           | operations for contains and overlaps.
        
         | mvanaltvorst wrote:
         | That sounds interesting, but I would imagine it would become
         | very complicated once you start applying nontrivial functions
         | (discontinuous functions, for example). In that case the range
         | of possible values could actually become discontinuous. I would
         | imagine accounting for that is actually more computationally
         | expensive than just using arbitrary precision decimals.
        
           | OskarS wrote:
           | Yeah, you call tan() on that number, and suddenly your
           | interval is like most of the number line. Actually, you don't
           | even have to be that fancy: if the number is close to
           | epsilon, the error bars on 1/x would be huge.
        
             | yarg wrote:
             | Sure, but what's the use case for mathematics where you
             | don't know what side of an asymptote you're on?
        
               | [deleted]
        
             | crazygringo wrote:
             | But isn't that a feature, rather than a bug? It prevents
             | you from getting "false accuracy".
        
               | klyrs wrote:
               | Yes, that's a feature. If you're using interval
               | arithmetic and your result is an unreasonably large
               | interval, then there's a good chance the algorithm in use
               | is numerically unstable.
        
         | drsopp wrote:
         | Perhaps something like
         | https://pythonhosted.org/mcerp/index.html
        
         | 29athrowaway wrote:
         | You cannot compare floating point numbers like that.
         | 
          | The equality test for floating point numbers is comparing
          | against an epsilon:
          | 
          |     Math.abs(0.3 - (0.1 + 0.2)) < Number.EPSILON
          | 
          | Which is the same in other languages.
         | 
         | Using the epsilon for comparison is not mentioned in the
         | article. Floating point absorption is also not mentioned in the
         | article.
         | 
         | This entire discussion and the fact this is on the front page
         | of HN is pretty disappointing and sad.
         | 
          | Is this really a surprise for you? If it is... have you ever
         | implemented any logic involving currency? You may want to take
         | another look at it.
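          | 
          | (The Python analogue, for comparison; math.isclose uses a
          | relative tolerance of 1e-09 by default:)
          | 
          |     import math
          | 
          |     print(0.1 + 0.2 == 0.3)              # False
          |     print(math.isclose(0.1 + 0.2, 0.3))  # True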
        
         | AaronFriel wrote:
         | Interval arithmetic is what you're looking for, and there's an
         | IEEE standard and many implementations.
        
           | crazygringo wrote:
           | Thank you!!
           | 
           | Yes, that turns out to be exactly it [1]. Looks like there's
           | even at least one JavaScript library for it [2].
           | 
           | It seems like such a useful and intuitive idea I have to
           | wonder why it isn't a primitive in any of the common
           | programming languages.
           | 
           | [1] https://en.wikipedia.org/wiki/Interval_arithmetic
           | 
           | [2] https://github.com/mauriciopoppe/interval-arithmetic
        
             | yongjik wrote:
             | Currently, either a programmer understands that floating
             | point numbers are complicated beasts, or they don't, and
             | get surprised.
             | 
             | With interval arithmetic, either a programmer would
             | understand that floating point numbers are not actually
             | numbers but intervals... or they wouldn't, and get
             | surprised.
             | 
             | So I don't really see much upside. If you _know_ that you
              | need interval arithmetic, chances are that you're already
              | using it.
        
             | gugagore wrote:
             | Interval arithmetic certainly has its place. However, you
             | don't find it used more often because a naive
             | implementation results in intervals that are often
             | uselessly huge.
             | 
                | Consider x in [-1, 1], and y in [-1, 1]. x*y is also in
                | [-1, 1], and x-y in [-2, 2]. But now suppose that actually
                | y=x. That's consistent with those intervals, yet then x*y
                | is really in [0, 1] and x-y is exactly 0, so the true
                | ranges are much smaller than what we've computed.
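                | 
                | (A small sketch of that overestimate in Python, using
                | plain (lo, hi) pairs and a hypothetical helper:)
                | 
                |     def interval_sub(a, b):
                |         # Naive interval subtraction: it cannot know
                |         # that a and b might be the same quantity.
                |         return (a[0] - b[1], a[1] - b[0])
                | 
                |     x = (-1.0, 1.0)
                |     print(interval_sub(x, x))  # (-2.0, 2.0), yet x-x = 0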
        
               | crazygringo wrote:
               | Sure, but wouldn't realistic intervals be more like x in
               | [0.29999999999999996, 0.30000000000000004]?
               | 
               | I mean, intervals as large as whole numbers might make
               | sense if your calculations are dealing with values in the
               | trillions and beyond... but isn't the point of interval
               | arithmetic to deal with the usually tiny errors that
               | occur in FP representation?
        
             | enriquto wrote:
             | > It seems like such a useful and intuitive idea I have to
             | wonder why it isn't a primitive in any of the common
             | programming languages.
             | 
             | It is basically useless for numerical computation when you
             | perform iterations. Even good convergent algorithms can
             | diverge with interval arithmetic. As you accumulate
             | operations on the same numbers, their intervals become
             | larger and larger, growing eventually to infinity. It has
             | some applications, but they are quite niche.
        
               | lifthrasiir wrote:
               | It is true that error intervals eventually go to
               | infinity, but the very point is to keep them small enough
               | to be useful throughout the calculation. IA is pretty bad
               | for that but other approaches like affine arithmetic can
               | be much better. (It still doesn't make an approachable
               | interface for general programming languages though.)
        
               | crazygringo wrote:
               | But if the intervals are growing to infinity, then should
               | you be trusting your result at all?
               | 
               | Are there really cases where current FP arithmetic gives
               | an accurate result, but where the error bounds of
               | interval arithmetic would grow astronomically?
               | 
               | It seems like you'd have to trust FP rounding to always
               | cancel itself out in the long run instead of potentially
               | accumulating more and more bias with each iteration. Is
               | that the case?
               | 
               | Wouldn't the "niche" case be the opposite -- that
               | interval arithmetic is the general-purpose safe choice,
               | while FP algorithms without it should be reserved for
               | those which have been mathematically proven not to
               | accumulate FP error? (And would ideally output their own
               | bespoke, proven, interval?)
        
               | enriquto wrote:
               | > But if the intervals are growing to infinity, then
               | should you be trusting your result at all?
               | 
               | Most often, yes; the probability distribution of your
               | number inside that interval is not uniform, it is most
               | likely very concentrated around a specific number inside
               | the interval, not necessarily its center. After a few
               | million iterations, the probability of the correct number
               | being close to the boundary of the interval is smaller
               | than the probability of all your atoms suddenly
               | rearranging themselves into an exact copy of Julius
               | Caesar. According to the laws of physics, this
               | probability is strictly larger than zero. Would you think
               | it "unsafe" to ignore the likelihood of this event? I'm
               | sure you wouldn't, yet it is certainly possible. Just
               | like the correct number being near the boundaries of
               | interval arithmetic.
               | 
                | Meanwhile, the computation using classical floating
                | point typically produces a value that is effectively
                | very close to the exact solution.
               | 
               | > It seems like you'd have to trust FP rounding to always
               | cancel itself out in the long run instead of potentially
               | accumulating more and more bias with each iteration. Is
               | that the case?
               | 
                | The whole subject of numerical analysis deals with this
                | very problem. It is extremely well known which kinds of
                | algorithms you can trust and which are dangerous (the so-
                | called ill-conditioned algorithms).
        
         | [deleted]
        
       | [deleted]
        
       | neilv wrote:
        | One of the many reasons I think we all would've been better off,
        | had Brendan Eich decided he could simply use Scheme within the
        | crazy time constraint he'd been given, rather than create
        | JavaScript, :) is that Scheme comes with a distinction between
        | exact and inexact numbers, in its numerical tower:
       | 
       | https://en.wikipedia.org/wiki/Numerical_tower
       | 
       | One change I'd consider making to Scheme, and to most high-level
       | general-purpose languages (that aren't specialized for number-
       | crunching or systems programming), is to have the reader default
       | to reading numeric literals as exact.
       | 
       | For example, the current behavior in Racket and Guile:
        | 
        |     Welcome to Racket v7.3.
        |     > (+ 0.1 0.2)
        |     0.30000000000000004
        |     > (+ #e0.1 #e0.2)
        |     3/10
        |     > (exact->inexact (+ #e0.1 #e0.2))
        |     0.3
       | 
       | So, I'd lean towards getting the `#e` behavior without needing
       | the `#e` in the source.
       | 
       | By default, that would give the programmer in this high-level
       | language the expected behavior.
       | 
       | And systems programmers, people writing number-crunching code,
       | would be able to add annotations when they want an imprecise
       | float or an overflowable int.
       | 
       | (I'd also default to displaying exact fractional rational numbers
       | using familiar decimal point conventions, not the fractional form
       | in the example above.)
        
         | 29athrowaway wrote:
          | Many languages make a distinction between binary floating-point
          | numbers and decimal or fixed-point numbers. Decimal types (e.g.
          | "Decimal" in Python or C#, "BigDecimal" in Java) do not suffer
          | from this problem.
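          | 
          | (For instance, Python's decimal module keeps such decimal
          | literals exact:)
          | 
          |     from decimal import Decimal
          | 
          |     total = Decimal('0.1') + Decimal('0.2')
          |     print(total == Decimal('0.3'))   # True
          |     print(Decimal(0.1))   # the float 0.1's exact binary value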
        
         | Wowfunhappy wrote:
          | This would make particular sense in a language like Python,
          | which no one is (or should be?) using for systems programming.
        
           | neilv wrote:
            | Agreed. Though, for _reasons_, I had to write essentially a
           | userland device driver in Python (complete with buffer
           | management and keyboard decoder). It was rock-solid in
           | production, in remote appliances, and I was very glad Python
           | was up to that. :)
        
         | vincent-manis wrote:
         | A Scheme system that implemented exact reals as unnormalized
         | floating decimal (IEEE 754-2008), coupled with a directive that
         | said `numbers with a decimal point should/should not be read as
         | exact' would be wonderful, not just for financial things, but
         | also for teaching students.
        
           | neilv wrote:
           | It's actually easy to implement that slight variation in
           | Racket, as a `#lang` or reader extension.
           | 
           | As an example of a Scheme-ish `#lang`, here's a Racket `#lang
           | sicp` that I made to mimic MIT Scheme, as well as add a few
           | things needed for SICP: https://github.com/sicp-
           | lang/sicp/blob/master/sicp/main.rkt
           | 
           | It would be even easier to make a `#lang better-scheme`, by
           | defining just a few changes relative to `racket-base`, such
           | as how numbers are read.
        
       | arduinomancer wrote:
       | Does this mean I could write a calculator in JavaScript which is
       | more accurate than the language but not as fast?
       | 
       | For example: just treat numbers as strings and write code that
       | adds the digits one by one and does the right carries
       | 
       | Now that I think about it, is this the whole point of the Java
       | BigDecimal class?
        
         | jeffbee wrote:
         | Yes, and it would be inexcusable malpractice to implement a
         | calculator using the native floating-point type.
        
           | spicybright wrote:
           | lol, malpractice. What if you want a calculator that
           | specifically uses native floating point math, like to aid in
           | programming, or just playing with the datatype?
        
         | dsego wrote:
         | It's been done, there are libraries out there.
        
         | teachingassist wrote:
          | You can likely do better than this with rational numbers,
          | working with an integer numerator and denominator; you'll still
          | have to make compromises for irrational numbers.
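          | 
          | (Python's fractions module is a ready-made version of that
          | idea:)
          | 
          |     from fractions import Fraction
          | 
          |     total = Fraction(1, 10) + Fraction(2, 10)
          |     print(total == Fraction(3, 10))   # True, exact
          |     print(Fraction(2, 3))             # stays 2/3, never rounded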
        
       | 29athrowaway wrote:
        | Do it in Python and many other languages and you'll get the same
        | result:
        | 
        |     >>> 0.1 + 0.2
        |     0.30000000000000004
       | 
       | That's the expected behavior of floating-point numbers, more
       | specifically, IEEE 754.
       | 
       | If you don't want this to happen, use fixed-point numbers, if
       | they're supported by your language, or integers with a shifted
       | decimal point.
       | 
       | Personally, I think if you don't know this, it's not safe for you
       | to write computer programs professionally, because this can have
       | real consequences when dealing with currency.
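        | 
        | (A sketch of the shifted-decimal-point approach in Python,
        | keeping currency amounts as integer cents; the values here are
        | made up:)
        | 
        |     price = 1999                # $19.99 stored as integer cents
        |     total = 3 * price           # 5997, exact integer arithmetic
        |     print(f"${total // 100}.{total % 100:02d}")   # $59.97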
        
       | IncRnd wrote:
       | This is why games and certain other types of coding use fixed
       | point arithmetic.
        
       | arnon wrote:
       | Floating point considered harmful
       | 
        | Edit: this is not a blanket statement. It was meant in context.
        
         | dragonwriter wrote:
          | Not so much floating point as "using a floating point type for
          | exact decimal literals".
        
           | arnon wrote:
           | Fair...
        
         | phoe-krk wrote:
         | Misunderstanding floating point is more harmful than floating
         | point itself.
        
           | arnon wrote:
           | Hence why it's harmful.
        
             | MaxBarraclough wrote:
             | Floating point arithmetic powers everything from computer
             | graphics to weather simulations. Seems rather silly to
             | dismiss it as an anti-pattern.
        
             | OskarS wrote:
              | Floating point is fine. Non-integers are inherently tricky
              | to represent, especially when you have to pack them into 32
              | bits. You could maybe quibble with some of the decisions
             | around NaN and denormals and things like that, but mostly
             | IEEE-754 got it right. There's a reason it's been the
             | standard for three and a half decades now, and it's served
             | the computer industry very well.
             | 
             | Incidentally: the fact that 0.1+0.2 does not equal 0.3 is
             | not something you could quibble with, that is absolutely
             | reasonable for a floating point standard. It would be
             | insanity to base it off of base 10 instead of base 2.
        
               | aidenn0 wrote:
               | Of course it shouldn't be base 10. It should be base 60.
               | That has more prime factors _and_ wastes fewer bits than
               | BCD does.
        
               | lmilcin wrote:
               | Or base 256 so that it fits a byte efficiently. Oh...
               | wait...
        
               | UncleMeat wrote:
               | I think one of the problems today is that floating point
               | remains the default even for scripting languages like
               | python. I'd wager that the huge majority of users of
               | floating point arithmetic in their programs actually want
               | correct math rather than efficient operations. This feels
               | a bit like Random vs SecureRandom. So many people use the
               | default and then accidentally shoot themselves in the
               | foot. IMO, for scripting languages we should have
               | infinite precision math by default and then be able to
               | use floating point if you really care about speed.
        
               | edflsafoiewq wrote:
               | What precise number representation do you want that's
               | free of such "surprises"?
        
               | lmilcin wrote:
               | There exists no representation without "surprises" and it
               | is easy to show why that must be.
               | 
                | The closest you can get are some systems like
                | Matlab/Octave that specialize in working with numbers.
        
               | lifthrasiir wrote:
               | This is not a trivial decision because rational numbers
               | can have an unbounded denominator and it can cause a
               | serious performance problem. Python explicitly ruled
               | rational numbers out due to this issue [1]. There are
               | multiple possible answers:
               | 
               | - Keep rational numbers; you also have an unbounded
               | integer so that's to be expected. (Well, unbounded
               | integers are visible, while unbounded denominators are
                | mostly invisible. For example it is very rare to
               | explicitly test denominators.)
               | 
               | - Use decimal types with precision controlled in run
               | time, like Python `decimal` module. (That's still
               | inexact, also we need some heavy language and/or runtime
               | support for varying precision.)
               | 
               | - Use interval arithmetic with ordinary floating point
               | numbers. (This would be okay only if every user knows
               | pitfalls of IA: for example, comparison no longer returns
               | true or false but also "unsure". There may be some clever
               | language constructs that can make this doable though.)
               | 
               | - You don't have real numbers, just integers. (I think
               | this is actually okay for surprisingly large use cases,
               | but not all.)
               | 
               | [1] https://python-history.blogspot.com/2009/02/early-
               | language-d... (search for "anecdote")
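                | 
                | (A quick Python illustration of the unbounded-
                | denominator point, iterating a made-up map with exact
                | fractions; after a dozen steps the denominator already
                | has thousands of digits:)
                | 
                |     from fractions import Fraction
                | 
                |     x = Fraction(1, 3)
                |     r = Fraction(7, 2)
                |     for _ in range(12):
                |         x = r * x * (1 - x)
                |     print(len(str(x.denominator)))  # thousands of digits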
        
               | UncleMeat wrote:
               | Sure, every solution has problems. If there was an
               | obvious alternative with little downside I think we'd
               | have adopted it decades ago.
               | 
               | I just suspect that alternatives bite people less
               | frequently than floating point does.
        
         | bidirectional wrote:
          | Not at all. For all its faults, floating point is _incredibly_
          | fast. It's not some convenience hack that we lazy programmers
          | have come up with, it's an incredibly quick way to do numerical
          | computation. It will always have its place (sometimes even in
          | finance to represent money, to many people's shock).
        
           | lifthrasiir wrote:
           | To add to that, in modern processors FP calculation is faster
           | than integer calculation, both in terms of latency and
           | throughput (as long as you don't hit subnormal numbers). This
           | is very unintuitive and mostly due to disproportional
           | demands.
        
         | lmilcin wrote:
         | "People repeating stuff without understanding it considered
         | harmful."
         | 
         | Floating point is extremely useful. Too bad so many people have
         | no idea how and when to use it. Including some people that
         | design programming languages.
         | 
         | Please, tell me, mister, how would you perform complex
         | numerical calculations efficiently?
         | 
          | I guess we should just forget about drones and a bunch of other
          | stuff because 90% of developers have no clue how to use FP?
        
           | lifthrasiir wrote:
           | > Please, tell me, mister, how would you perform complex
           | numerical calculations efficiently?
           | 
           | If your calculation turned out to be incorrect it doesn't
           | matter if it's efficient. Correct FP calculation requires
           | error analysis, which is a concrete definition of "how to use
           | it". If you mostly use packaged routines like LAPACK, then
           | you don't exactly need FP; you need routines that internally
           | use FP.
        
             | lmilcin wrote:
             | > Correct FP calculation requires error analysis
             | 
              | No, it does not. Please, don't make it seem harder than it
              | needs to be.
              | 
              | In 99% of applications, if you don't do anything stupid you
              | are completely fine.
              | 
              | If you care about precision so much that the last digit
              | makes a difference for you, you are probably one of very
              | few cases. I remember somebody giving an example with the
              | circumference of the solar system, showing that the
              | uncertainty of the value of Pi available as FP causes a
              | couple of centimeters of error at the orbit of Pluto, or
              | something like that.
             | 
             | (Edit: found it: https://kottke.org/16/03/how-many-digits-
             | of-pi-does-nasa-use)
             | 
              | Most of the time floating point input is already coming
              | with its own error; you are just adding calculation error
              | to the input uncertainty. But the calculation error is so
              | much smaller that in most cases it is safe to ignore it.
              | 
              | For example, if you program a drone, you have readings from
              | the IMU which have nowhere near the precision of the double
              | or even float you will be working with.
              | 
              | There are also various techniques of ordering the
              | operations to minimize the resulting error. If you are
              | aware of which kinds of operations in which situations can
              | cause a huge resulting error, it is usually very easy to
              | avoid it.
             | 
              | The only really special case is if you try to subtract two
              | values that were calculated separately and match almost
              | exactly. This should be avoided.
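              | 
              | (A classic Python illustration of that special case:
              | computing 1 - cos(x) for tiny x in two algebraically
              | equivalent ways.)
              | 
              |     import math
              | 
              |     x = 1e-8
              |     print(1 - math.cos(x))          # 0.0, cancellation
              |     print(2 * math.sin(x / 2)**2)   # ~5e-17, accurate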
        
               | UncleMeat wrote:
               | > if you don't do anything stupid you are completely fine
               | 
               | In this case, "doing anything stupid" includes equality
               | checks. This is the sort of footgun that a huge number of
               | people are going to run into.
        
               | lifthrasiir wrote:
               | > 99% applications, if you don't do anything stupid you
               | are completely fine.
               | 
                | This is a common misconception. Or "anything stupid" is
                | quite a bit broader than you think.
               | 
               | The main issue with FP arithmetic is that effectively
                | every calculation incurs an error. You seem to be aware
                | of catastrophic cancellation, but it is problematic not
                | because of the cancellation but because the error is
                | proportional to the input, which can wildly vary after
                | the cancellation. Non-catastrophic cancellation can still
                | cause big errors in the long run for a large range of
               | magnitudes. A long-running process thus requires some
               | guard against FP errors, even if it's just a reality
               | check (like, every object in the game should be in a
               | skybox).
               | 
               | Alternatively you do not run a long-running process. I
               | don't think you fly a drone for days? Probably you won't
               | fly a drone that long even without FP, but using FP does
               | prohibit many things and thus affects your decision. For
               | example game simulations tend to avoid FP because it can
               | accumulate errors and it is also hard to do FP
               | calculation in a repeatable way (in typical
                | environments). If I chose to use FP for simulations I
                | would effectively give up running simulations in both
                | servers and clients -- that kind of thing.
        
               | lmilcin wrote:
               | I work on financial risk calculations during day and
               | embedded control devices after my official work hours.
               | 
               | My controller running moving horizon estimator to
               | simulate thermal system of the espresso machine and
               | Kalman filters to correct model parameters against
               | measurements runs 50 times a second and works fine for
               | days, thank you, using floats on STM32.
               | 
                | I have written a huge amount of embedded software over
                | the past 20 years and I have yet to do any error analysis
                | focused on actual floating point inaccuracy. I never saw
                | any reduced performance of any of my devices due to FP
                | inaccuracy. That's because I know what kind of things can
                | cause those artifacts and I just don't do that stupid
                | shit.
               | 
                | What you describe are just no-nos you should not be doing
                | in software.
                | 
                | There exists a huge amount of software that works for
                | years at a time, heavy in FP math, and it works
                | completely fine.
                | 
                | If drones were not limited in flight time they would also
                | work fine, because what you describe are just stupid
                | errors, not a necessity when using FP math.
                | 
                | Game simulations don't use FP mostly because it is
                | expensive. Fixed point is still faster, and you can
                | approximate the results of most calculations or use
                | lookup tables to get a result that is just fine.
               | 
                | Some games (notably Quake 1, whose source code I worked
                | on after it became available) do require that the
                | simulation results between client and server are in
                | lockstep, but this is in no way a necessity. Requiring
                | lockstep between client and server causes lag.
                | 
                | What newer games do is allow the client and server to run
                | simulations independently even when they don't agree
                | exactly, and then reconcile the results from time to
                | time.
               | 
               | If you ever saw your game partner "stutter" in flight,
               | that was exactly what happened -- the client and the
               | server coming to an agreement on what is actual reality
               | and moving stuff into its right place.
               | 
                | In that light, there is absolutely no problem if after a
                | couple of frames the object differs from the official
                | position by a tenth of a pixel; it is going to get
                | corrected soon.
               | 
               | Go do your "error analysis" if you feel so compelled, but
               | don't be telling people this is necessary or the software
               | isn't going to work. Because that's just stupid. Maybe
               | NASA does this. You don't need to.
        
       | The_rationalist wrote:
       | The problem is solved in many languages such as Java by suffixing
       | F to the numbers.
        
       | hprotagonist wrote:
       | It sure does: https://0.30000000000000004.com/
        
         | tyingq wrote:
         | Their summary of Mysql 5.6
         | (https://0.30000000000000004.com/#mysql) isn't telling the
         | whole story.
         | 
         | "SELECT .1 + .2;" does return 0.3
         | 
          | However,
          | 
          |     CREATE TABLE t1 (f FLOAT);
          |     INSERT INTO t1 VALUES(0.1),(0.2);
          |     SELECT SUM(f) FROM t1;
          |     // returns 0.30000000447034836
         | 
         | Which feels odd to me.
         | 
         | http://sqlfiddle.com/#!9/2e75e/3
        
           | lsb wrote:
           | You're at 32-bit precision there.
        
         | RedShift1 wrote:
         | God I love that the Internet does things like this. Thanks,
         | this put a smile on my face today.
        
       | lelf wrote:
        |     Coq < Compute 0.1.
        |     Toplevel input, characters 8-11:
        |     > Compute 0.1.
        |     >         ^^^
        |     Warning: The constant 0.1 is not a binary64 floating-point
        |     value. A closest value 0x1.999999999999ap-4 will be used and
        |     unambiguously printed 0.10000000000000001.
        |     [inexact-float,parsing]
        | 
        |          = 0.10000000000000001
        |          : float
        
       | tmabraham wrote:
       | https://twitter.com/qntm/status/1381346718919356416 Hahahaha!
        
       | OskarS wrote:
        | Not in Raku it doesn't!
        | 
        |     > 1.1 + 2.2
        |     3.3
        |     > 1.1 + 2.2 == 3.3
        |     True
       | 
       | EDIT: to be clear: this is not because Raku is magic, it's
       | because Raku defaults to a rational number type for decimal
       | literals, which is arguably a much better choice for a language
       | like Raku.
        
         | espadrine wrote:
         | The same goes in Common Lisp, but for very different reasons:
          | 
          |     * (= (+ 0.1 0.2) 0.3)
          |     T
         | 
         | In Common Lisp, there is a small epsilon used in floating-point
         | equality: single-float-epsilon. When two numbers are within
         | that delta, they are considered equal.
         | 
         | Meanwhile, in Rakudo, 0.1 is a Rat: a rational number where the
         | numerator and denominator are computed.
         | 
         | You can actually get the same underlying behavior in Common
          | Lisp:
          | 
          |     (= (+ 1/10 2/10) 3/10)
         | 
         | Sadly, not many recent languages have defaults as nice as
          | those. Another example is Julia:
          | 
          |     julia> 1//10 + 2//10 == 3//10
          |     true
         | 
         | IMO, numerical computations should be correct by default, and
          | fast on an opt-in basis.
        
         | ChrisLomont wrote:
          | Because that syntax in Raku uses a rational type, which fails
          | for many other uses, and by using the syntax most languages use
          | for a floating type, it makes it harder to spot these issues,
          | just like here. For example,
          | 
          |     0.1e0 + 0.2e0
         | 
          | yields 0.30000000000000004. Your example also fails:
          | 
          |     1.1e0 + 2.2e0 == 3.3e0
         | 
         | returns false.
        
           | OskarS wrote:
            | I mean, yeah, if you force the numbers to be floats, then of
            | course it's going to fail. I personally think Raku's way of
            | defaulting to rationals is the better way to go for a
            | scripting language like this, and I disagree that "it fails
            | for many other uses". It works just fine (like, it doesn't
            | break if you pass it to sqrt() or whatever), it's just less
           | performant. It's the exact same kind of tradeoff that
            | Python's implicit promotion to big integers makes.
        
         | pvorb wrote:
         | Ahem, this is about 0.1 + 0.2. I think Raku also uses IEEE 754
         | double precision floating point numbers for the Num type, no?
         | 
         | Edit: it seems that Raku uses rationals as a default [1], so it
         | doesn't suffer from the same problem by default.
         | 
         | [1]: https://0.30000000000000004.com/#raku
        
           | OskarS wrote:
           | Oh, missed that, it's usually 1.1 + 2.2 in these kinds of
           | discussions.
           | 
           | Yeah, exactly, Raku defaults to a rational number type for
           | these kinds of numbers. I honestly think that is a perfectly
           | fine way to do it, you're not using Raku for high performance
           | stuff anyway. It's not so different from how Python will
           | start to use arbitrarily sized integers if it feels it needs
           | to.
           | 
           | Raku by default will convert it to a float if the denominator
           | gets larger than a 64-bit int, but there's actually a current
           | pull request active that lets you customize that behavior to
           | always keep it as a Rat.
           | 
           | Really interesting language, Raku!
        
       | bassdropvroom wrote:
       | Super interesting. I'd noticed this behaviour previously, but
       | never knew how or why this was the case (and not really bothered
       | to search for it either). Thanks!
        
       | phoe-krk wrote:
       | [2018]
        
       | wodenokoto wrote:
        | In Python, 0.3 prints as 0.3, but it's a double, so it should be
       | 0.299999999999999988897769753748434595763683319091796875
       | (according to the article, and the 0.1+0.2 != 0.3 trick also
       | works)
       | 
       | What controls this rounding?
       | 
        | e.g., in an interactive Python prompt I get:
        | 
        |     >>> b = 0.299999999999999988897769753748434595763683319091796875
        |     >>> b
        |     0.3
        
         | lifthrasiir wrote:
         | It is the shortest decimal number that converts back to that
         | exact FP number. There are tons of complex algorithms for that
         | [1].
         | 
         | [1] See my past comment for the overview:
         | https://news.ycombinator.com/item?id=26054079
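          | 
          | (You can see both sides of this from Python: the exact stored
          | value versus the shortest string that round-trips to it.)
          | 
          |     from decimal import Decimal
          | 
          |     print(Decimal(0.3))   # 0.29999999...796875, the exact value
          |     print(repr(0.3))      # '0.3', shortest round-tripping form
          |     print(float('0.3') == 0.3)   # True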
        
           | young_unixer wrote:
           | Isn't that essentially lying to the user?
        
             | lifthrasiir wrote:
             | 0.299999999999999988897769753748434595763683319091796875
             | suggests 54 fractional digits of precision, which is
             | misleading.
             | 
             | 0.29999999999999998 or 0.29999999999999999 are less
             | misleading but wasteful. Remember they are not only visible
             | to users but can be serialized in decimal representations
             | (thanks to, e.g. JSON).
             | 
             | In fact, most uses of FP are essentially lies, providing an
             | illusion of real numbers with a limited storage. It is just
             | a matter of choosing _which_ lie to keep.
        
         | has2k1 wrote:
          | It depends on how many decimal places you are printing:
          | 
          |     >>> f'{b:.54f}'
          |     '0.299999999999999988897769753748434595763683319091796875'
          |     >>> f'{b:.16g}'
          |     '0.3'
          |     >>> f'{b:.17g}'
          |     '0.29999999999999999'
        
       | worik wrote:
       | Golly. Surprised by floating point arithmetic?
       | 
       | 1.99999999.... == 2.0
       | 
       | There are limits to computer representation of floating point
       | numbers. Computers are finite state, floating point numbers are
       | not.
       | 
       | sigh
        
         | chrisseaton wrote:
         | > Computers are finite state, floating point numbers are not.
         | 
         | No, floating point numbers _are_ finite state. That's the
         | _whole point_ behind this discussion. There are only so many
         | possible floating point numbers representable in so many bits.
         | 
         | I never understand this confusion - you have finite memory -
         | with this you can only represent a finite set of real numbers.
         | So of course all the real numbers can't be mapped directly.
        
           | caf wrote:
           | I understand the confusion. It occurs when people haven't
           | fully grokked that floating point numbers generally use
           | binary representation, and that the _set of numbers that can
           | be represented with a finite number of decimal digits is
           | distinct from the set of numbers that can be represented with
           | a finite number of binary digits_. People generally know that
            | they can't write down the decimal value of 1/3 exactly -
           | they just haven't considered that for the same reason you
           | can't write down the binary value of 1/10 exactly either.
           | 
           | This confusion is also helped along by the fact that the
           | input and output of such numbers is generally still done in
           | decimal, often rounded, that both decimal and binary can
           | exactly represent the integers with a finite number of
           | digits, and that the set of numbers exactly representable
            | in a finite decimal expansion is a superset of those
           | exactly representable in a finite binary expansion (since 2
           | is a factor of 10).
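            | 
            | (Python can show the exact binary value that 1/10 actually
            | gets stored as:)
            | 
            |     from fractions import Fraction
            | 
            |     print((0.1).hex())     # 0x1.999999999999ap-4
            |     print(Fraction(0.1))   # 3602879701896397/36028797018963968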
        
         | bhaak wrote:
         | You mean "real numbers".
         | 
         | Floating point numbers are one way of approximating real
         | numbers on computers.
        
           | worik wrote:
           | Yes.
        
       | cratermoon wrote:
        | If you think _that_ is crazy, check out Muller's Recurrence:
       | https://scipython.com/blog/mullers-recurrence/
        
       | bluenose69 wrote:
       | In R, there are functions for practical equality (to within a
       | tolerance that makes sense on the local machine), e.g.
        | 
        |     > all.equal(0.1+0.2,0.3)
        |     [1] TRUE
        | 
        | and functions for actual equality, e.g.
        | 
        |     > identical(0.1+0.2,0.3)
        |     [1] FALSE
        
       | zamadatix wrote:
       | I've seen a lot of stuff on getting the shortest representation
       | that is equal to the floating point value back but what about
       | finding the minimum/maximum representation that is equal to a
       | given value?
        
         | kccqzy wrote:
         | That's a rather easier problem in comparison. Just use the
         | nextafter function in the standard library to figure out the
         | next representable number. Then try not to exceed half of the
         | difference using string processing.
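          | 
          | (In Python 3.9+ this is math.nextafter; for example, the
          | neighbours of 0.3:)
          | 
          |     import math
          | 
          |     print(math.nextafter(0.3, math.inf))   # 0.30000000000000004
          |     print(math.nextafter(0.3, -math.inf))  # 0.29999999999999993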
        
           | zamadatix wrote:
           | Ah "nextafter" is indeed what I was looking for it just isn't
           | in the JS standard library or Python version I use. Google
           | has plenty examples of the function once you know what it's
           | called though.
           | 
            | Complexity-wise that actually seems to give an equally simple
            | "shortest answer" method: nextafter up and down, and using
            | text processing find the first digit that changes, see if it
            | can be zero, if not choose the lowest value it can be and
            | increment by one, remove the rest of the string accordingly,
            | and right-trim any 0s from the result.
        
       | globular-toast wrote:
       | Was the title of this post automatically generated? Why did it
       | turn "0.1 + 0.2" into "0.1 and 0.2"?
        
       | RcouF1uZ4gsC wrote:
        | This is pretty much the same problem as asking what 1/3 + 1/3
        | equals in decimal. You are specifying fractions that don't have
        | an exact finite representation in that base (1/3 in base 10, and
        | 0.1 or 1/10 in base 2).
       | 
       | With proper rounding and I/O these are not generally an issue.
        
       | dec0dedab0de wrote:
       | I think high level languages shouldn't even have floats, unless
       | they're a special type for doing floating point math.
       | 
        | Specifically I'm thinking about Python: the literal x.x should be
        | a Decimal, and float should have to be imported to be used as an
        | optimization if you need it.
        
       | Black101 wrote:
        | it's ok... there are bigger mistakes/bugs at stock brokers
        
       | gravelc wrote:
        | As an aside, just finished qntm's spectacularly good "There Is No
        | Antimemetics Division". Highly worth a read if you're after some
        | highly original sci-fi.
        
       | [deleted]
        
       | kazinator wrote:
        | This just has to do with printing.
        | 
        |     This is the TXR Lisp interactive listener of TXR 256.
        |     Quit with :quit or Ctrl-D on an empty line. Ctrl-X ? for
        |     cheatsheet.
        |     TXR works even if the application surface is not free of
        |     dirt and grease.
        |     1> (+ 0.1 0.2)
        |     0.3
        | 
        | OK, so then:
        | 
        |     2> (set *print-flo-precision* 17)
        |     17
        |     3> (+ 0.1 0.2)
        |     0.30000000000000004
        | 
        | But:
        | 
        |     4> 0.1
        |     0.10000000000000001
        |     5> 0.2
        |     0.20000000000000001
        |     6> 0.3
        |     0.29999999999999999
       | 
       | I.e. 0.1 isn't exactly 0.1 and 0.2 isn't exactly 0.2 in the first
       | place! The misleading action is to compare the _input_ notation
       | of 0.1 and 0.2 to the _printed output_ of the sum, rather than
       | consistently compare nothing but values printed using the same
       | precision.
       | 
       | The IEEE double format can store 15 decimal digits of precision
       | such that all those decimal digits are recoverable. If we print
       | values to no more than 15 digits, then things look "artificially
       | clean" for situations like (+ 0.1 0.2).
       | 
       | I made *print-flo-precision* have an initial value of 15 for this
       | reason.
       | 
       | The 64 bit double gives us 0.1, 0.2 and 0.3 to 15 digits of
       | precision. If we round at that many digits, we don't see the
       | trailing junk of representational error.
       | 
       | Unfortunately, to 15 digits of precision, the data type gives us
       | two different 0.3's: the 0.299999... one and the 0.3.....04 one.
        | Thus:
        | 
        |     7> (= (+ 0.1 0.2) 0.3)
        |     nil
       | 
       | That's the real kicker; not so much the printing. This
       | representational issue bites you regardless of what precision you
       | print with and is the reason why there are situations in which
       | you cannot compare floating-point values exactly.
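        | 
        | (The same demonstration from Python, formatting to 15 and then
        | 17 significant digits:)
        | 
        |     >>> f"{0.1 + 0.2:.15g}", f"{0.3:.15g}"
        |     ('0.3', '0.3')
        |     >>> f"{0.1 + 0.2:.17g}", f"{0.3:.17g}"
        |     ('0.30000000000000004', '0.29999999999999999')
        |     >>> (0.1 + 0.2) == 0.3
        |     False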
        
         | lmilcin wrote:
         | > The misleading action is to compare the input notation of 0.1
         | and 0.2 to the printed output of the sum, rather than
         | consistently compare nothing but values printed using the same
         | precision.
         | 
          | I think the problem is the act of caring about the least
          | significant bits.
          | 
          | If you care about the least significant bits of a floating
          | point number it means you are doing something wrong. FP numbers
          | should be treated as approximations.
         | 
         | More specifically, the problem above is assuming that floating
         | point addition is associative to the point of giving you
          | results that you can compare. In floating point, the order of
          | operations matters for the least significant bits.
         | 
         | FP operations should be treated as incurring inherent error on
         | each operation.
         | 
          | The IEEE standard is there to make it easier to do repeatable
          | calculations (for example, to be able to find regressions in
          | your code or compare against another implementation) and for
          | you to be able to reason about the magnitude of the error.
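          | 
          | (A quick Python illustration of that order dependence:)
          | 
          |     print((0.1 + 0.2) + 0.3)   # 0.6000000000000001
          |     print(0.1 + (0.2 + 0.3))   # 0.6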
        
       | georgeburdell wrote:
       | I use this as one of my interview questions (in a piece of code
       | where it would run correctly if 0.1 + 0.2 = 0.3). Maybe 1/3 of
       | interviewees recognize the cause, and maybe half of those can
       | actually explain why and how to mitigate it. I work in scientific
        | computing, so it's absolutely relevant to my work.
        
       ___________________________________________________________________
       (page generated 2021-04-11 23:01 UTC)