[HN Gopher] Pi calculation world record with over 202T digits
       ___________________________________________________________________
        
       Pi calculation world record with over 202T digits
        
       Author : radicality
       Score  : 112 points
       Date   : 2024-07-12 07:41 UTC (3 days ago)
        
 (HTM) web link (www.storagereview.com)
 (TXT) w3m dump (www.storagereview.com)
        
       | Someone wrote:
       | I guess they were sponsored by their hardware manufacturers, on
       | the condition that they mentioned their name once for every
       | trillionth digit of pi they computed.
       | 
       | I can understand that they have to mention them, but I think
       | they're overdoing it.
        
         | ycombinete wrote:
         | The StorageReview Lab Team would never do that.
        
         | lifeisstillgood wrote:
          | Nah this is a hardware review magazine - it's like asking
          | Jeremy Clarkson to stop saying "Lamborghini" so often :-)
        
         | netsharc wrote:
         | Only 11 hits, 10 of them in the beginning and end bits, which
         | were probably written by the marketing team (and ChatGPT).
         | 
         | It'd probably be amusing to ask ChatGPT to rewrite the article
         | so that every sentence contains "StorageReview Lab"...
        
       | egberts1 wrote:
       | This new pi value should land us on the precise nanometer on a
       | planetary rock of a sun located some 18 trillion light years
       | away.
       | 
        | More than good enough for a Star Trek transporter targeting
        | system, provided that sufficient power can reach it and that it
        | can compensate for planetary orbital speed, orbital curvature,
        | and surface rotation rate, as well as the same set of values for
        | its solar system's path around its galaxy, and its galaxy's path
        | through its eyewatering cornucopia of galaxies.
        | 
        | But it may not be good enough for precise calculation of field
        | interactions within a large group of elementary particles in
        | quantum physics, thanks to Heisenburg's Indeterminacy Principle
        | (aka Uncertainty Principle).
        
         | slyall wrote:
         | > This new pi value should land us on the precise nanometer on
         | a planetary rock of a sun located about 18 trillion light years
         | away.
         | 
         | 40 digits or so will get you that...
        
           | hughesjj wrote:
           | Nah, observable universe is only 93B light years in
           | "diameter" at the current "moment"
        
             | theandrewbailey wrote:
             | 93 billion light years is 8.798x10^26 meters[0], so about
             | 35 digits ought to suffice for any given nanometer.
             | 
              | [0] https://www.wolframalpha.com/input?i=93+billion+light+years+...
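              | 
              | That estimate is easy to reproduce (a minimal sketch in
              | Python, not from the article; the 93 billion light year
              | diameter is the figure cited above):
              | 
              |     import math
              | 
              |     metres_per_ly = 9.461e15
              |     diameter_m = 93e9 * metres_per_ly       # ~8.8e26 m
              |     diameter_nm = diameter_m * 1e9
              |     # decimal places of pi needed to keep the error in a
              |     # circumference of that diameter below one nanometre
              |     print(math.ceil(math.log10(diameter_nm)))  # -> 36, i.e. the ~35 above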
        
               | egberts1 wrote:
               | Now that's a solid win for us math nerds.
        
           | constantcrying wrote:
           | In floating point arithmetic two consecutive operations can
           | have an unbounded error. Just because the precision is good
           | enough for one computation doesn't mean it is good enough for
           | all computations.
        
         | LeoPanthera wrote:
         | This is why Star Trek transporters have "Heisenburg
         | compensators". Everyone knows that. And also that you have to
         | disable them if you want to beam holographic objects off a
         | holodeck.
         | 
         | It's just good science.
        
           | batch12 wrote:
           | Only if you need to trick a sentient AI into thinking it's
           | part of the real world and not in a Russian doll cube of
           | hypervisors.
        
             | _joel wrote:
             | We just need to crack out a Tommy Gun for that.
        
             | 0cf8612b2e1e wrote:
             | I know it was a monster-of-the-week format, but this
             | episode really stuck with me. Created sentient life to
             | never be discussed again. Data is only special in that he
             | has a physical body.
        
               | batch12 wrote:
               | Stuck on a shelf somewhere, oblivious to the fact that
               | it's in a simulated environment. It'd be an interesting
               | Star Trek II type followup when someone finds the cube
               | and plugs in a cable only to have Moriarty escape and
               | find a mobile emitter prototype somewhere on the
               | network.. but I digress..
        
               | LeoPanthera wrote:
                | For what it's worth, Moriarty does show up again in Star
                | Trek Picard, but only as a brief cameo.
        
           | loloquwowndueo wrote:
           | Heisenberg, not Heisenburg.
        
             | egberts1 wrote:
              | I'm turning in my physics card. (*stares at ground*)
        
         | NKosmatos wrote:
         | Perhaps the following statement from NASA will help ;-)
         | 
         | "For JPL's highest accuracy calculations, which are for
         | interplanetary navigation, we use 3.141592653589793" (15
         | digits).
         | 
         | How Many Decimals of Pi Do We Really Need? :
         | https://www.jpl.nasa.gov/edu/news/2016/3/16/how-many-decimal...
        
           | zamadatix wrote:
            | It's a shame they don't mention why they use specifically 15
            | digits (because of doubles?). It would be more satisfying to
            | know the reason for that specific amount after the
            | explanation.
        
             | tomtom1337 wrote:
             | Agreed! "What would happen if we were off by some tiny
             | fraction" is a really interesting question to me!
        
               | gnramires wrote:
                | Something like spaceflight is subject to chaotic forces
                | and unpredictable interactions, so any precision beyond
                | about 15 decimal digits becomes negligible. (For example,
                | a planetary force will probably vary by that much in ways
                | models don't capture, like subtle tidal forces and subtle
                | variations in its orbit, etc.) Navigation usually
                | involves measuring your position and adjusting course,
                | which in theory needs much less precision.
                | 
                | Simulating physical systems to extremely high precision
                | (e.g. more than double precision) generally seems
                | pointless in most situations because of those effects.
        
               | fwip wrote:
               | Adding on to this - the physical characteristics of the
               | spacecraft itself are also not machined to nearly 15
               | digits of tolerance, so these feedback systems are
               | necessary even if the rest of the universe were perfectly
               | modeled.
        
             | snet0 wrote:
             | I imagine that you'd want to use fixed-point arithmetic
             | when it comes to these things, right? Floating-point
             | numbers are good enough for a lot of things, but the
             | precise thing they're bad at is dealing in high precision
             | at multiple orders of magnitude, which feels like a thing
             | NASA would kinda need.
        
               | mr_mitm wrote:
               | A lot of the parameters that enter their equations are
               | probably measurements, like the gravitational
               | acceleration, properties of some material, and so on. The
                | numerical solutions to their equations have an error that
                | is at least that of the least precise parameter, which I
                | can't imagine being better than four significant digits, so
               | doubles should provide plenty of precision. The error
               | introduced by the numerical algorithms can be controlled.
               | I don't see why you'd need fixed point arithmetic.
        
               | snet0 wrote:
               | Okay yeah, what you're saying seems true.
               | 
               | I guess the GP comment was discussing that, with this new
               | measurement of pi, we now have enough precision (in pi)
               | to reference a point this small on an object this far
               | away. Once you account for all the other uncertainties in
               | referencing that point, as you mentioned, all that
               | precision in one dimension of the measurement is
               | completely meaningless.
               | 
               | It still feels weird that you'd use an arithmetic with
               | guaranteed imprecision in a field like this, but I can
               | definitely see that, as long as you constrain the scales,
               | it's more than enough.
        
               | lanstin wrote:
                | They put in scheduled course-correction burns, as there's
                | a lot of uncertainty outside the math - the fuel burns
                | probably can't be controlled to 5 sig figs, for example.
               | Also, although I have no idea if this matters, N-body
               | orbital mechanics itself is a chaotic system, and there
               | will be times when the math just won't tell you the
               | answer. https://botsin.space/@ThreeBodyBot if you like to
               | see a lot of examples of 3-body orbits. (maybe just in
               | 2d, I'm not sure).
        
               | fwip wrote:
               | Fixed-point also has guaranteed imprecision for many
               | operations, because you only have a finite number of
               | digits after the decimal point.
               | 
                | e.g., with two decimal digits: (2.83 * 0.10) = 0.283,
               | which is stored as 0.28.
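                | 
                | A toy version of that truncation (a sketch, not any real
                | fixed point library; values stored as integer hundredths):
                | 
                |     SCALE = 100                      # two decimal digits
                | 
                |     def fx(x):                       # encode, rounding
                |         return round(x * SCALE)
                | 
                |     def fx_mul(a, b):                # multiply, truncate
                |         return (a * b) // SCALE
                | 
                |     a, b = fx(2.83), fx(0.10)        # 283 and 10
                |     print(fx_mul(a, b) / SCALE)      # -> 0.28, not 0.283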
        
               | constantcrying wrote:
               | >but the precise thing they're bad at is dealing in high
               | precision at multiple orders of magnitude, which feels
               | like a thing NASA would kinda need.
               | 
                | The precise thing they are good at is dealing with
                | numbers _in_ a wide range of magnitudes, whereas fixed
                | point numbers cannot be used if the magnitudes vary
                | wildly.
               | 
               | You can only use fixed point arithmetic if you _know_
               | that every intermediate calculation you make will take
               | place in a specific range of precision. E.g. your base
               | precision might be millimeters, so a 32 bit fixed point
               | number is exact up to one millimeter, but can at maximum
               | contain a distance of 2^32-1 millimeters, so around 4.3
                | billion millimeters. But again you have to keep in mind
                | that this is the maximum value for _every_ intermediate
                | result. E.g. when calculating the distance between two
                | points in 3D space you square the coordinate differences,
                | so each difference needs to be less than the square root
                | of 4.3 billion (about 65 meters in this millimeter
                | scheme), and the sum of the three squares has to fit too.
                | 
                | This makes fixed point arithmetic very hard to implement
                | correctly and requires a very deep analysis of the
                | system, to make sure that the arithmetic is correct.
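                | 
                | To make the intermediate-range problem concrete, a toy
                | version of the millimeter scheme above (a sketch, not
                | anyone's flight code): the coordinates fit a 32-bit word
                | easily, but the squared differences inside a distance
                | calculation do not.
                | 
                |     import math
                | 
                |     MAX32 = 2**32 - 1                # ~4.3e9
                | 
                |     # two points about 80 m apart, coordinates in mm
                |     p = (0, 0, 0)
                |     q = (80_000, 10_000, 5_000)
                | 
                |     sq = sum((a - b) ** 2 for a, b in zip(p, q))
                |     print(sq > MAX32)        # True: would overflow 32 bits
                |     print(math.isqrt(sq))    # 80777 mm: the result itself fits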
        
               | IshKebab wrote:
               | Probably not - 64 bit float pretty much has enough bits
               | that it wouldn't be an issue even on the scale of the
               | solar system. Even if it was it would be easier just to
               | switch to 128 bit float than deal with fixed point.
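                | 
                | For a sense of scale (a quick sketch; math.ulp needs
                | Python 3.9+, and 6e12 m is roughly the radius of Pluto's
                | orbit):
                | 
                |     import math
                | 
                |     distance_m = 6e12              # ~40 AU
                |     print(math.ulp(distance_m))    # ~0.00098 m
                |     # a double still resolves about a millimetre out there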
        
               | constantcrying wrote:
               | For two operations the floating point error is unbounded.
               | If it ever made sense to carefully analyze a fixed point
               | system it is for manned space flight.
        
               | pclmulqdq wrote:
                | That isn't exactly true. Floating point error is only
                | really unbounded if you hit a cancellation (i.e.
                | subtracting two nearly equal numbers), which you can
                | avoid by doing some algebra.
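                | 
                | A generic illustration of both halves of that (not the
                | poster's example): sqrt(x+1) - sqrt(x) cancels badly for
                | large x, and a little algebra removes the subtraction of
                | nearly equal values entirely.
                | 
                |     import math
                | 
                |     x = 1e15
                |     naive  = math.sqrt(x + 1) - math.sqrt(x)
                |     stable = 1.0 / (math.sqrt(x + 1) + math.sqrt(x))
                |     print(naive)    # only a digit or two correct
                |     print(stable)   # ~1.5811388300841896e-08, near full precision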
        
               | constantcrying wrote:
               | That is totally wrong.
               | 
               | You can not in general avoid cancellation, even claiming
               | that is ridiculous. WTF are you even saying.
        
               | fwip wrote:
               | Why not?
        
               | constantcrying wrote:
               | Because if your algorithm contains an "-" and you don't
               | have complete control over the inputs to both sides of
               | that minus, you will have cancellation.
               | 
               | There is no general way to mitigate that, you can use
               | numerically superior algorithms or screen your inputs,
               | but these only help in specific cases. There is no
               | general way to avoid this, every algorithm needs to be
               | treated specifically.
        
               | IshKebab wrote:
               | That's not really related to floating point though.
               | You'll have the same issues for fixed point.
        
               | constantcrying wrote:
               | >That's not really related to floating point though.
               | You'll have the same issues for fixed point.
               | 
               | You don't. "-" is exact for fixed point unless the
               | operation falls outside the range of valid values.
        
               | pclmulqdq wrote:
               | Exact and precise are different ideas. In fixed point,
               | all operations but division are exact. In fixed point,
               | all operations have precision related to the magnitude of
               | the numbers being operated on. You can have a 64-bit
               | fixed point number system that gives you 16 bits of
               | precision on most of your operations.
               | 
               | In floating point, almost every operator (other than
               | subtraction) has precision of the full width of the
               | mantissa minus 0.5 ULPs. All operators are not guaranteed
               | to always be exact, but they are _far more precise_ on
               | average than equivalent operators in fixed point.
               | 
               | Cancellation isn't an issue of _exactness_ , it's an
               | issue of _precision_.
        
               | constantcrying wrote:
               | Sure, but errors for fixed point happen very differently
               | to floating point errors.
               | 
               | E.g. (a-b)*c, which is the common example for
               | cancellation, if a and b are very close, can have an
               | unbounded error compared to the result in the real
               | numbers, in floating point. Since all operations besides
               | "/" are exact in fixed point, no error can be introduced
               | by this operation in fixed point (if all operations are
               | representable).
               | 
                | Claiming that fixed and floating point suffer in the same
                | way is just wrong.
        
               | pclmulqdq wrote:
               | (a - b) in fixed point will have very low _precision_ ,
               | too, and will generally be exact in floating point. The
               | following multiplication by c may or may not be exact in
               | floating point (or in fixed point, mind you -
               | multiplication of two n bit numbers exactly needs 2n bits
               | of precision, which I doubt you will give yourself in
               | fixed point). The "unbounded" error comes in when a and b
               | themselves are not exact, and you have that same
               | exactness problem in fixed point as you do in floating
               | point.
               | 
               | For example, suppose your fixed point format is "the
               | integers" and your floating point format has 6
               | significant digits: if you have real-valued a = 100000.5
               | and b = 100001.9, both number systems will round a to
               | 100001 and b to 100002. In both cases, (b - a) will be 1
               | while (b - a) should be 1.4 if done in the reals. That
               | rounding problem exists in fixed point just as much as in
               | floating point. In both systems, the operation that
               | causes the cancellation is itself an exact calculation,
               | but the issue is that it's not _precise_. Fixed point
               | will just give you 1 in the register while floating point
               | will add a bunch of spurious trailing zeros. Floating
                | point can represent 1.4, though, while fixed point can't.
                | If a and b were represented exactly (a = 100001 and b
               | = 100002 in the reals), there would be no problem in
               | either number system.
               | 
               | The only times that you get better cancellation behavior
               | are when you have more precision to the initial results,
               | which when comparing double precision float to 64-bit
               | fixed point comes when your operands in fixed point have
               | their MSB at the 53rd position or above. That only
               | happens when your dynamic range is so deeply limited that
               | you can't do much math.
               | 
               | When you are thinking about cancellation numerically,
               | _exact_ is a red herring. _Precise_ is what you want to
               | think about.
        
               | pclmulqdq wrote:
               | There is no generic algorithm to completely prevent
               | cancelation. However, there are a lot of specific little
               | ways you can do algebra to push it around so it doesn't
               | hurt you badly (note that I said "avoid", not "prevent").
               | I would conjecture that the vast majority of numerical
               | systems can be designed that way if you take the time to
               | think about it.
               | 
               | Or you can just use something like herbie that thinks
               | about it for you: https://herbie.uwplse.org/
        
               | constantcrying wrote:
               | >I would conjecture that the vast majority of numerical
               | systems can be designed that way if you take the time to
               | think about it.
               | 
               | Sometimes there are ways to mitigate this, sometimes
               | there aren't. Sometimes you need to precondition,
               | sometimes you need to rearrange, sometimes you need a
               | different algorithm, sometimes you need to normalize,
               | sometimes you need to use a different arithmetic and so
               | on.
               | 
               | For solving _linear systems_ alone, there are definitely
               | thousands of papers dealing with the problems arising
               | from this. For every single algorithm you write and for
               | all data which comes into that algorithm, you need a
               | careful analysis if you want to exclude the potential of
               | significant numerical errors.
               | 
                | Your comment makes it seem like this is a small problem,
                | where you can just look at an algorithm for a while and
                | fix it; this is literally a hundred-year research project
                | in numerics.
        
               | pclmulqdq wrote:
               | > Sometimes there are ways to mitigate this, sometimes
               | there aren't. Sometimes you need to precondition,
               | sometimes you need to rearrange, sometimes you need a
               | different algorithm, sometimes you need to normalize,
               | sometimes you need to use a different arithmetic and so
               | on.
               | 
               | > For solving linear systems alone, there are definitely
               | thousands of papers dealing with the problems arising
               | from this. For every single algorithm you write and for
               | all data which comes into that algorithm, you need a
               | careful analysis if you want to exclude the potential of
               | significant numerical errors.
               | 
               | It sounds like we agree that cancellation is avoidable
               | with some analysis, and there are hundreds of techniques
               | you can use to deal with it, but mostly it's the ~5 you
               | listed there. And as you suggest, I don't believe this is
               | nearly as significant a problem in the general case as
               | you think it is. A careful error analysis is possible if
               | you care (and if ever you cared, it would be on a
               | spacecraft), and far easier in floating point than in
               | many other number systems, including fixed point number
               | systems.
               | 
               | Numeric systems that truly fix cancellation are
               | _incredibly big and heavy_ , and cannot usually be used
               | for real-time calculations in a generic form. Fixed point
               | certainly doesn't fix cancellation - it introduces
               | precision loss issues on every operation you do that
               | causes a number to go down in magnitude. It is actually
               | harder to design systems in fixed point that avoid
               | massive precision losses than it is in floating point,
               | and the error analysis is _much_ more substantial.
        
               | constantcrying wrote:
               | >I don't believe this is nearly as significant a problem
               | in the general case as you think it is.
               | 
               | My original comment was about manned space flight in
               | particular. If your application is relatively generic I
               | think it is completely okay, if you are aware of it and
               | mitigate the most pressing issues.
               | 
               | >Numeric systems that truly fix cancellation are
               | incredibly big and heavy, and cannot usually be used for
               | real-time calculations in a generic form.
               | 
               | You can use interval arithmetic, which guarantees that
               | you at least know when cancellation has occurred.
               | Interval arithmetic is fast enough for real time,
               | although it has its own significant drawbacks.
               | 
               | > It is actually harder to design systems in fixed point
               | that avoid massive precision losses than it is in
               | floating point, and the error analysis is much more
               | substantial.
               | 
                | Absolutely. My point was that a manned spacecraft might
                | just be the place to do it.
        
               | pclmulqdq wrote:
               | > My original comment was about manned space flight in
               | particular. If your application is relatively generic I
               | think it is completely okay, if you are aware of it and
               | mitigate the most pressing issues.
               | 
               | Everything we have been talking about relates to space
               | flight. In fact, with humans on board, you can afford to
               | be a lot less precise, because they can work around most
               | numerical issues by hand. The Apollo guidance computers,
               | for example, were prone to occasional instances of gimbal
               | lock and numerical instability, and the astronauts just
               | fixed it.
               | 
               | > You can use interval arithmetic, which guarantees that
               | you at least know when cancellation has occurred.
               | Interval arithmetic is fast enough for real time,
               | although it has its own significant drawbacks.
               | 
               | Interval arithmetic does not prevent cancellation. It's
               | just two floating point calculations, both of which are
               | actually less precise than the one you would do otherwise
               | (you don't use default rounding for interval arithmetic,
               | you round the bottom down and the top up). You do know
               | when things have been canceled, but you know that in a
               | floating point calculation anyway if you have done the
               | error analysis.
               | 
               | My overall point here is that NASA isn't missing anything
               | by using floating point instead of using other weird or
               | exotic arithmetic systems. Double-precision floating
               | point combined with a rudimentary error analysis and some
               | algebra is good enough for pretty much everything, and
               | you may not be able to do better at all with fixed point.
               | Designing fixed point algorithms also depends on a very
               | careful analysis of interval ranges and precisions, and
               | often gets you nothing over just using "double", where
               | the error analysis is _easier_ anyway.
               | 
               | If you need to do better than double, there's also
               | double-double arithmetic for your hard parts, which is a
               | similar speed to interval arithmetic and doubles the
               | precision you get beyond double.
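                | 
                | For reference, the building block double-double rests on
                | is an "error-free transformation": a normal add plus a
                | few extra operations that recover exactly what the add
                | rounded away (a sketch of the classic Knuth TwoSum, not
                | the poster's code):
                | 
                |     def two_sum(a, b):
                |         # s = fl(a + b), and a + b == s + err exactly
                |         s = a + b
                |         bb = s - a
                |         err = (a - (s - bb)) + (b - bb)
                |         return s, err
                | 
                |     print(two_sum(1e16, 1.0))   # (1e16, 1.0)
                |     # the 1.0 that a plain add silently drops is kept in err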
        
             | _a_a_a_ wrote:
             | They do say why. By example.
        
             | pclmulqdq wrote:
             | 16 decimal places (really about 15.9 decimal places) is
             | what you get with double-precision floating point.
             | 
             | Double-double would be about 31 digits, and quad precision
             | would get you 34.
             | 
             | Single-precision gets you a bit more than 7 digits.
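              | 
              | Those digit counts fall straight out of the significand
              | widths (a back-of-the-envelope sketch; double-double is
              | taken as ~106 bits):
              | 
              |     import math
              | 
              |     for name, bits in [("single", 24), ("double", 53),
              |                        ("double-double", 106), ("quad", 113)]:
              |         print(name, round(bits * math.log10(2), 2))
              |     # single 7.22, double 15.95, double-double 31.91, quad 34.02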
        
               | zamadatix wrote:
               | The double representation of pi just so happens to be
               | accurate to precisely 15 digits even though the general
               | amount of precision is slightly higher.
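                | 
                | Easy to check directly (a small sketch; the reference
                | string is just pi typed out to 30 decimal places):
                | 
                |     import math
                |     from decimal import Decimal
                | 
                |     reference = "3.141592653589793238462643383279"
                |     as_double = str(Decimal(math.pi))  # exact value of the double
                |     print(reference[:20])   # 3.141592653589793238
                |     print(as_double[:20])   # 3.141592653589793115
                |     # agreement stops after ...589793: 15 decimal places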
        
           | DrNosferatu wrote:
           | I suppose because that's what
           | 
           | "atan(1) * 4"
           | 
           | casts to double?
           | 
           | - I wonder if this cast is always correct in C [ie.: math.h],
           | no matter the datatype and/or the number base?
        
             | constantcrying wrote:
             | >- I wonder if this cast is always correct in C [ie.:
             | math.h], no matter the datatype and/or the number base?
             | 
              | Floating point arithmetic is deterministic. As long as it
              | is implemented as specified, atan(1) _has_ to give the
              | floating point number which is the closest approximation to
              | the real number pi/4 (in the current rounding mode). The
              | multiplication by 4 means that precision can be lost and
              | potentially your result is no longer the closest possible
              | approximation to pi.
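              | 
              | For what it's worth, on a typical CPython build this
              | particular case does come out right (a quick check, not a
              | guarantee; it depends on the platform's libm giving a
              | correctly rounded atan):
              | 
              |     import math
              | 
              |     print(math.atan(1.0) * 4 == math.pi)  # True on common platforms
              |     # pi/4 and pi share a significand, so the *4 is just an
              |     # exponent bump and lands on the double nearest pi,
              |     # provided atan(1.0) itself was correctly rounded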
        
               | adgjlsfhk1 wrote:
               | this isn't true. the standard only recommends correct
               | rounding, but does not actually set any limits on
               | acceptable error. also, no OS provided libm produces
               | correctly rounded results for all inputs.
        
               | constantcrying wrote:
               | > the standard only recommends correct rounding
               | 
               | What? This is not true at all. The standards specifies
               | adherence to IEEE 754 arithmetic.
               | 
                | You can read the standard here:
                | https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
               | 
               | Page 507 for adherence to number formats. Page 517 for
               | atan adhering to IEEE 754 specification for the functions
               | defined therein, which guarantees best possible results
               | for individual operations.
               | 
               | Any C implementation where atan gives a result which is
               | inconsistent with IEEE 754 specification does not adhere
               | to the standard.
               | 
               | > also, no OS provided libm produces correctly rounded
               | results for all inputs.
               | 
               | Every IEEE 754 conforming library does adhere to the best
               | possible rounding guarantee. If you have any evidence to
               | the contrary that would be a disaster and should be
               | reported to the vendor of that library ASAP.
               | 
               | Can you provide some function and some input which
               | violates the IEEE 754 guarantee together with the
               | specific library and version? Or are you just making
               | stuff up?
        
               | AlotOfReading wrote:
               | In the interests of moving this discussion in a positive
               | direction, the comment you're replying to is correct.
               | IEEE 754 doesn't specify correct rounding except for a
               | small subset of elementary functions. In the 1985
               | version, this was the core +, -, *, /, and sqrt, but it
               | was updated to include a few of the other functions when
               | they were added. arctan is one of those functions which
                | is not always correctly rounded due to the _tablemaker's
                | dilemma_. If you read the latest standard (2019), they
               | actually cite some of the published literature giving
               | specific worst case examples for functions like arctan.
               | 
               | Even beyond transcendental functions, 754 isn't
               | deterministic in practice because implementations have
               | choices that aren't always equivalent. Using FMA vs
               | separate multiplication and addition leads to different
               | results in real programs, even though both methods are
               | individually deterministic.
        
               | constantcrying wrote:
               | >arctan is one of those functions which is not always
               | correctly rounded due to the tablemaker's dilemma.
               | 
               | But then it doesn't conform to the standard. It is pretty
               | unambiguous on that point.
               | 
               | From Section 9.2:
               | 
               | "A conforming operation shall return results correctly
               | rounded for the applicable rounding direction for all
               | operands in its domain."
               | 
               | I do not see how two conforming implementations can
               | differ in results.
               | 
               | >Using FMA vs separate multiplication and addition leads
               | to different results in real programs, even though both
               | methods are individually deterministic.
               | 
               | Obviously. I never claimed that the arithmetic was
               | invariant under transformations which change floating
               | point operations, but are equivalent for real numbers.
               | That would be ridiculous.
               | 
               | Is there actually an example of two programs performing
               | identical operations under the same environment that give
               | different results where both implementations conform to
               | the standard?
               | 
               | >Even beyond transcendental functions, 754 isn't
               | deterministic in practice because implementations have
               | choices that aren't always equivalent.
               | 
               | Could you give an example? Where are implementations
               | allowed to differ? And are these cases relevant, in the
               | sense that identical operations lead to differing
               | results? Or do they just relate to error handling and
               | signaling.
        
               | Extigy wrote:
               | That section is recommended but not required for a
               | conforming implementation:
               | 
               | > 9. Recommended operations
               | 
               | > Clause 5 completely specifies the operations required
               | for all supported arithmetic formats. This clause
               | specifies additional operations, recommended for all
               | supported arithmetic formats.
               | 
               | Hyperbolic tan is in the list of recommended functions,
               | and yet: https://github.com/numpy/numpy/issues/9187
        
               | constantcrying wrote:
               | >That section is recommended but not required for a
               | conforming implementation:
               | 
               | Who cares? The C standard for math.h requires these
               | functions to be present as specified. They are specified
               | to round correctly, the C standard specifies them to be
               | present as specified, therefore the C standard specifies
               | them as present and correctly rounded. I literally quoted
                | the relevant sections; there are no conforming C
                | implementations which give different results.
               | 
               | >Hyperbolic tan is in the list of recommended functions,
               | and yet: https://github.com/numpy/numpy/issues/9187
               | 
               | Any evidence whatsoever that this is caused by two
               | differing implementations of tanh, which BOTH conform to
               | the IEEE 754 standard?
               | 
                | Everyone is free to write their own tanh; it is totally
                | irrelevant what numpy gives, unless there are calls to
                | two standard-conforming tanh functions which for the same
                | datatype produce different results.
        
               | Extigy wrote:
               | > The C standard for math.h requires these functions to
               | be present as specified. They are specified to round
               | correctly, the C standard specifies them to be present as
               | specified, therefore the C standard specifies them as
               | present and correctly rounded. I literally quoted the
               | relevant sections, there are no conforming C
               | specification which give different results.
               | 
               | Forgive me, but I cannot see that in the document
               | sections you point out. The closest I can see is F.10-3,
               | on page 517, but my reading of that is that it only
               | applies to the Special cases (i.e values in Section
               | 9.2.1), not the full domain.
               | 
               | In fact, my reading of F.10-10 (page 518) suggests that a
               | conforming implementation does not even have to honor the
               | rounding mode.
        
               | AlotOfReading wrote:
                | Feel free to take a look at the relevant glibc page for
                | error bounds:
                | https://www.gnu.org/software/libc/manual/html_node/Errors-in...
               | 
               | I'm not aware of any libm implementations that will
               | guarantee correct rounding across all inputs for all
               | types. I'm aware of a few libm's that will guarantee that
               | for floats (e.g. rlibm:
               | https://people.cs.rutgers.edu/~sn349/rlibm/ ), but these
               | are not common.
        
               | constantcrying wrote:
               | Sure, but this means those libm's aren't implementing
               | IEEE 754.
               | 
               | Genuinely a bit shocked by this.
        
               | AlotOfReading wrote:
               | I don't particularly want to read the standard today to
               | quote line and verse, but it's generally understood in
               | the wider community that correct rounding is not required
               | by 754 outside a small group of core functions where it's
               | practically reasonable to implement. This includes
               | everything from the 754 implementation in your CPU to
               | compiler runtimes. Correct rounding is computationally
               | infeasible without arbitrary precision arithmetic, which
               | is what the major compilers use at compile time. If
               | you're expecting it at any other time, I'm sorry to say
               | that you'll always be disappointed.
        
               | constantcrying wrote:
               | I mean, maybe I am just an insane guy on the internet,
               | but to me "correctly rounded", just sounds a bit
               | different to "the implementor gets to decide, how many
               | correct bits he wants to provide".
        
               | AlotOfReading wrote:
               | We're thankfully in a world these days where all the
               | relevant implementations are sane and reliable for most
               | real usage, but a couple decades back that was very much
               | the practical reality. Intel's x87 instruction set was
               | infamous for this. Transcendentals like fsin would
               | sometimes have fewer than a dozen bits correct and worse,
               | the documentation on it was straight up wrong until Bruce
               | Dawson on the chrome team filed a bug report.
        
               | adgjlsfhk1 wrote:
               | also multiplication by 4 doesn't round since 4 is a power
               | of 2
        
               | constantcrying wrote:
               | That just is not true.
               | 
               | It is a bit shift to the right, so where do the new bits
               | come from? Why would the two new bits be the correct
               | ones?
        
               | dasyatidprime wrote:
               | In binary floating point, 4.0 = 1.0x2^2, so the mantissa
               | of the multiplicand will stay the same (being multiplied
               | by 1.0) and the exponent will be incremented by 2.
               | Scaling by exact integer powers of 2 preserves the
               | relative accuracy of the input so long as you stay in
               | range. The increase in absolute error is inherent to the
               | limited number of mantissa bits and not introduced by any
               | rounding from the multiplication; there are no additional
               | bits.
        
               | constantcrying wrote:
               | Who cares?
               | 
                | This is about the approximation to pi, _not_ the
                | approximation to float(atan(1))*4. It is exact (but
                | irrelevant) for the latter; for the former you lose two
                | bits, so you have a 25% chance of correctly rounding
                | towards pi.
        
           | kens wrote:
           | That NASA article kind of misses the point. NASA uses 15
           | digits for pi because that's the default and it is enough
           | accuracy for them. The interesting question is why is that
           | the default. That goes back to the Intel 8087 chip, the
           | floating-point coprocessor for the IBM PC. A double-precision
           | real in the 8087 provided ~15 digits of accuracy, because
           | that's the way Berkeley floating-point expert William Kahan
           | designed its number representation. This representation was
           | standardized and became the IEEE 754 floating point standard
           | that almost everyone uses now.
           | 
            | By the way, the first Ariane 5 launch blew up because of
            | floating point error, specifically an overflow when
            | converting a 64-bit float to a 16-bit integer. So be careful
            | with floats!
        
         | kens wrote:
         | That's a nice visual, but completely wrong. You're
         | underestimating the accuracy by the absurd amount of roughly
         | 10^160000000000000.
        
           | onlyrealcuzzo wrote:
           | That's 10^(1.6*10^14) for anyone who can't read that many 0s.
        
           | aaron695 wrote:
           | > amount of roughly 10^160000000000000.
           | 
           | You're also underestimating the accuracy by the absurd amount
           | of roughly 10^202000000000000 ;)
           | 
           | You need ~ zero of the digits of the calculated pi to do OPs
           | calculation.
           | 
            | [edit] My brain's melting, I think I'm wrong and you are
           | underestimating the underestimation of the accuracy by the
           | absurd amount of roughly 10^42000000000000. OP is
           | underestimating by 10^202000000000000.
        
             | kens wrote:
             | Yes, your edit is correct.
        
         | 0x1ceb00da wrote:
         | > This new pi value should land us on the precise nanometer on
         | a planetary rock of a sun located some 18 trillion light years
         | away.
         | 
         | What does this mean?
        
       | gizajob wrote:
       | And on only 2400 watts too. Impressive.
        
         | voxadam wrote:
          | Watts are a measure of instantaneous power, wouldn't the number
          | of Watt-hours (or kW·h) be more interesting in this context?
        
           | roflmaostc wrote:
           | 2400 Watts times 85 days
        
             | euroderf wrote:
             | About 4900 kWh. A few hundred bucks of EL ? Maybe more than
             | a grand ?
             | 
             | And prices-per-digit so low we're practically _giving_ them
             | away !
        
           | gizajob wrote:
           | Interesting in that it's not a Cray or a mainframe, chugging
           | megawatts for weeks on end to do the same thing.
        
       | theginger wrote:
       | If we are all living in a simulation there may have been an
       | extremely stressed cosmic sys admin racing to make sure we did
       | not get into an overflow situation.
        
         | Throw83839 wrote:
         | "Cosmic sys admin" can terminate any misbehaving "process" that
         | consumes too many resources. Matrix will protect itself, even
         | if it would take a murder!
        
           | malux85 wrote:
           | "even if" is generous, I am kill -9 crazy if stability is
           | even slightly threatened
        
           | stingraycharles wrote:
           | Unfortunately looking at the job requirements for a "Cosmic
           | sys admin", it looks like the universe was written in PHP
           | after all.
           | 
           | https://www.cosmicdevelopment.com/careers/system-admin/
        
         | throwaway211 wrote:
         | Pi's a trick that was invented to keep us busy.
         | 
          | That, or integers were, and we should have a natural number of
          | fingers and toes.
        
           | yarg wrote:
           | The natural numbers are a subset of the integers.
           | 
           | Either just the positives or the positives and zero.
        
           | smitty1e wrote:
           | _Pi_ and _e_ and the rest of the transcendentals are just
           | there to remind the mortals of their finite nature.
        
         | smitty1e wrote:
          | Reality is continuous. A simulation is discrete. The simulation
          | hypothesis seems less plausible than many a religious argument
          | one could proffer.
        
           | karmakurtisaani wrote:
           | Reality is discrete tho. Check out Planck length.
           | 
           | The simulation hypothesis is ridiculous in many other ways of
           | course.
        
             | messe wrote:
             | Reality, as best we understand it, is very much not
             | discrete, and space is not split into chunks at the Planck
             | length.
             | 
             | A discrete space-time would likely mean observable Lorentz-
             | violations.
        
               | Lichtso wrote:
               | Discrete spacetime does not necessitate a regular and
               | nicely ordered tesselation, it could be pure chaos or be
               | non-geometric.
               | 
                | We used to think that energy, mass and matter were
                | completely continuous, and that turned out to be wrong.
        
             | omnicognate wrote:
             | The Planck length has no physical significance. It's just a
             | unit of length, chosen for convenience as part of the
             | system of Planck units.
             | 
             | The Planck scale is the scale of lengths/times/etc that are
             | around 1 in Planck units. This happens to be the scale
             | roughly around which the quantum effects of gravity become
             | significant. Since we do not have an accepted quantum
             | theory of gravity, it's therefore the scale at which we
             | cease to have an accepted physical theory.
             | 
             | There is no evidence to suggest that space is discrete at
             | that scale. It's just the scale at which we have no
             | accepted theory, and AFAIK no evidence to evaluate such a
             | theory.
        
               | Lichtso wrote:
                | > The Planck length has no physical significance
                | 
                | > There is no evidence to suggest that space is discrete
                | at that scale
               | 
               | Bekenstein bound and "planckian discreteness".
               | 
               | Reality is most likely not composed of a regular
               | tesselation of simple geometric shapes such as a
               | cartesian voxel grid as that would introduce massive
               | anisotropy. But, there is still theoretic evidence
               | suggesting that spacetime is indeed discrete at the
               | lowest level.
        
               | omnicognate wrote:
               | Interesting stuff, and thanks - I didn't know about the
               | Bekenstein Bound. I was referring to empirical evidence,
               | though. AFAICS these are theoretical musings that, while
               | they have some validity as far as they apply to the
               | predictions of existing established theories, have to be
               | considered entirely speculative in the quantum gravity
               | regime where we don't have an accepted theory to base
               | such reasoning on.
        
               | Lichtso wrote:
               | > empirical evidence
               | 
                | That is going to be a tough one; at least direct
                | measurements are pretty much ruled out if you consider
                | that the Planck length is about twenty orders of
                | magnitude (10^20 times) smaller than the classical
                | electron radius.
               | 
               | So, we will have to settle for indirect evidence and
               | theoretical results.
        
             | rybosworld wrote:
             | It's a misconception that the Planck length represents the
             | smallest possible length.
        
               | karmakurtisaani wrote:
                | Indeed, thanks for the correction! However, it would
                | probably be a fine enough resolution for any simulation.
        
           | causal wrote:
           | That's making assumptions about how cosmic sim tech works.
        
         | islon wrote:
         | They just keep generating random numbers from the universe's
         | entropy source. It's quite simple actually.
        
           | wiz21c wrote:
           | > the universe's entropy source
           | 
           | What's the python package to get that ? :-)
        
           | complaintdept wrote:
           | If pi eventually starts behaving deterministically we'll know
           | we've exhausted /dev/random.
        
         | complaintdept wrote:
          | 202 trillion digits is impossible for one person to observe
          | directly anyway, so no resources consumed.
        
           | gshubert17 wrote:
           | Written or printed out at 3 digits per centimeter, 202
           | trillion digits would make a string about 670 million
           | kilometers long -- or about the distance between Earth and
           | Jupiter.
        
       | RamblingCTO wrote:
       | As pi never repeats itself, that also means that every piece of
       | conceivable information (music, movies, texts) is in there,
       | encoded. So as we have so many pieces of pi now, we could create
        | a file sharing system that's not based on sharing the data, but
        | on the position of a piece of the file in pi. That would be
        | kinda funny.
        
         | sammex wrote:
         | Would the index number actually be smaller than the actual
         | data?
        
           | psychoslave wrote:
            | You need both index and length, I guess. If concatenating
            | both values doesn't shrink the size enough, you can always
            | prefix a "number of times still needed to recursively
            | de-index (repeat, start-point-index, size) concatenated
            | triplets", and repeat until you match a desired size or
            | lower.
           | 
           | I don't know if there would be any logical issue with this
           | approach. The only _logistical_ difficulty I can figure out
            | is computing enough decimals and searching for the pattern
            | in them, but I guess that such a voluminous pre-computed
            | approximation can greatly help.
        
             | waldrews wrote:
             | No invertible function can map every non-negative integer
             | to a lower or equal non-negative integer (no perfect
             | compression), but you can have functions that compress
             | everything we care about at the cost of increasing the size
             | of things we don't care about. So the recursive de-indexing
             | strategy has to sometimes fail and increase the cost (once
             | you account for storing the prefix).
        
               | psychoslave wrote:
                | Is there some inductive proof of that? Or is that some
                | conjecture?
               | 
               | Actually any resources related to that point could be fun
               | to explore
        
               | waldrews wrote:
                | It's a classic application of the pigeonhole principle,
                | the first one in this list:
                | 
                | https://en.wikipedia.org/wiki/Pigeonhole_principle#Uses_and_...
        
           | waldrews wrote:
           | It would average the same size as the actual data. Treating
           | the pi bit sequence as random bits, and ignoring overlap
           | effects, the probability that a given n bit sequence is the
           | one you want is 1/2^n, so you need to try on average 2^n
           | sequences to find the one you want, so the index to find it
           | is typically of length n, up to some second order effects
           | having to do with expectation of a log not being the log of
           | an expectation.
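            | 
            | A small empirical version of that argument (a sketch,
            | assuming mpmath is installed; it treats random 4-digit
            | strings as tiny "files" and looks them up in the first
            | 100,000 digits of pi):
            | 
            |     import random
            |     from mpmath import mp
            | 
            |     mp.dps = 100_010
            |     digits = mp.nstr(+mp.pi, 100_000)[2:]   # drop "3."
            | 
            |     random.seed(0)
            |     hits = []
            |     for _ in range(200):
            |         target = f"{random.randrange(10_000):04d}"
            |         pos = digits.find(target)
            |         if pos >= 0:
            |             hits.append(pos)
            | 
            |     avg = sum(hits) / len(hits)
            |     print(round(avg))            # on the order of 10_000
            |     print(len(str(round(avg))))  # 4-5: roughly as many digits
            |                                  # of index as of data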
        
         | Moosturm wrote:
         | https://github.com/philipl/pifs
        
         | maxmouchet wrote:
         | https://news.ycombinator.com/item?id=8018818 and
         | https://github.com/philipl/pifs :-)
        
         | IsTom wrote:
          | There are many ways in which a number might never repeat
          | itself, yet not contain all sequences (e.g. never use a
          | specific digit). What you want is a normal number, and pi is
          | not proven to be one (though it probably is).
        
         | voytec wrote:
         | > As pi never repeats itself, that also means that every piece
         | of conceivable information (music, movies, texts) is in there,
         | encoded.
         | 
         | You reminded me of this Person of Interest clip:
         | https://www.youtube.com/watch?v=fXTRcsxG7IQ
        
         | A_D_E_P_T wrote:
         | > _every piece of conceivable information (music, movies,
         | texts) is in there, encoded_
         | 
         | Borges wrote a famous short story, "The Library of Babel,"
         | about a library where:
         | 
         | "... each book contains four hundred ten pages; each page,
         | forty lines; each line, approximately eighty black letters.
         | There are also letters on the front cover of each book; these
         | letters neither indicate nor prefigure what the pages inside
         | will say.
         | 
         | "There are twenty-five orthographic symbols. That discovery
         | enabled mankind, three hundred years ago, to formulate a
         | general theory of the Library and thereby satisfactorily
         | resolve the riddle that no conjecture had been able to divine--
         | the formless and chaotic nature of virtually all books. . .
         | 
         | "Some five hundred years ago, the chief of one of the upper
         | hexagons came across a book as jumbled as all the others, but
         | containing almost two pages of homogeneous lines. He showed his
         | find to a traveling decipherer, who told him the lines were
         | written in Portuguese; others said it was Yiddish. Within the
         | century experts had determined what the language actually was:
         | a Samoyed-Lithuanian dialect of Guarani, with inflections from
         | classical Arabic. The content was also determined: the
         | rudiments of combinatory analysis, illustrated with examples of
         | endlessly repeating variations. These examples allowed a
         | librarian of genius to discover the fundamental law of the
         | Library. This philosopher observed that all books, however
         | different from one another they might be, consist of identical
         | elements: the space, the period, the comma, and the twenty-two
         | letters of the alphabet. He also posited a fact which all
         | travelers have since confirmed: In all the Library, there are
         | no two identical books. From those incontrovertible premises,
         | the librarian deduced that the Library is "total"--perfect,
         | complete, and whole--and that its bookshelves contain all
         | possible combinations of the twenty-two orthographic symbols (a
         | number which, though unimaginably vast, is not infinite)--that
         | is, all that is able to be expressed, in every language."
         | 
          | I've done the (simple) math on this -- in fact I'm writing a
          | short book on the philosophy of mathematics where it's of
          | passing importance -- and the library contains some
          | 25^1,312,000 books (25 symbols, 410 x 40 x 80 = 1,312,000
          | characters per book), which makes 202T look like a very small
          | number.
         | 
         | So though everything you describe is encoded in Pi (assuming Pi
         | is infinite and normal) we're a long, long way away from having
         | useful things encoded therein...
         | 
         | Also, an infinite and normal Pi absolutely repeats itself, and
         | in fact repeats itself infinitely many times.
        
           | WillAdams wrote:
           | And for an amusing example of this see:
           | 
           | https://www.piday.org/find-birthday-in-pi/
        
             | NeoTar wrote:
             | I'm not sure why, but that website is beautifully broken
             | for me
             | 
             | - it asked for my birthday (e.g. 25th Feb 1986) using a day
             | / month / year form
             | 
             | - then converted to the m/dd/yy form (i.e. a string 22586),
             | 
             | - found that string in Pi,
             | 
              | - _forgot_ my birthday and messed up displaying that
              | somehow when converting back - saying that it found my
              | birthday of 22 / 5 / 86
        
           | no_news_is wrote:
           | You might be interested in the online version:
           | 
           | https://libraryofbabel.info/
           | 
           | I just submitted a sub-page of that site, which has some
           | discussion that touches more on the layout of the library as
           | described by Borges:
           | https://news.ycombinator.com/item?id=40970841
        
         | NooneAtAll3 wrote:
         | > As pi never repeats itself, that also means that every piece
         | of conceivable information (music, movies, texts) is in there,
         | encoded.
         | 
         | may I interest you in the difference between *irrational*
         | numbers and *normal* numbers?
         | 
         | look at https://en.wikipedia.org/wiki/Liouville_number - no
         | repeats, but minuscule "contained information"
        
         | euroderf wrote:
         | > every piece of conceivable information (music, movies, texts)
         | is in there, encoded.
         | 
         | So that means that if we give a roomful of infinite monkeys an
         | infinite number of hand-cranked calculators and an infinite
         | amount of time, they will, as they calculate an infinite number
         | of digits of pi, also reproduce the complete works of
         | Shakespeare et al.
        
           | _joel wrote:
           | and then do it all again, but backwards.
        
         | mkl wrote:
         | > As pi never repeats itself, that also means that every piece
         | of conceivable information (music, movies, texts) is in there,
         | encoded.
         | 
         | This is true for normal numbers [1], but is definitely not true
         | for all non-repeating (irrational) numbers. Pi has not been
         | proven to be normal. There are many non-repeating numbers that
         | are not normal, for example 0.101001000100001...
         | 
          | Storing the index into pi for a file would usually take
          | roughly as much space as just storing the file, and storing
          | or calculating enough digits to actually use that index would
          | be impossible with today's technology (or probably even the
          | next century's).
         | 
         | [1] https://en.wikipedia.org/wiki/Normal_number
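          | 
          | A small illustration of both points, as a sketch assuming the
          | mpmath library (the target string is just a hypothetical
          | example):
          | 
          |     from mpmath import mp, nstr
          | 
          |     N = 200_000                     # digits of pi to generate
          |     mp.dps = N + 10                 # working precision
          |     digits = nstr(+mp.pi, N)[2:]    # drop the leading "3."
          | 
          |     target = "22586"                # any short digit string
          |     print(digits.find(target))      # -1 if not in this prefix
          | 
          | A d-digit string typically first shows up around index 10^d,
          | so writing down the index takes about as many digits as the
          | string itself, and even a 202T-digit expansion typically only
          | contains specific strings up to about 14 digits long.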
        
           | tombert wrote:
           | It's conjectured to be normal isn't it? I know it hasn't been
           | proven yet, and I cannot seem to find where I read this, but
           | I thought there was at least statistical evidence indicating
           | that it's _probably_ normal.
        
             | adgjlsfhk1 wrote:
             | 100% of real numbers are normal, so that's pretty strong
             | statistical evidence
        
               | hn_throwaway_99 wrote:
               | What? No they're not, e.g. no rational numbers are
               | normal, and they are real.
        
               | GraphEnthusiast wrote:
               | The rational numbers make up "zero percent" of the real
               | numbers. It's a little hard to properly explain without
               | assuming a degree in math, since the proper way to treat
               | this requires measure theoretic probability (formally,
               | the rationals have measure zero in the reals for the
               | "standard" measure).
               | 
               | The short version is that the size of the reals is a
               | "bigger infinity" than the size of the rationals, so they
               | effectively have 'zero weight'.
               | 
               | Reference (very technical):
               | https://math.stackexchange.com/questions/508217/showing-
               | that...
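                | 
                | The usual short argument, sketched in LaTeX: list the
                | rationals as q_1, q_2, ... (they are countable) and
                | cover q_n with an interval of length \varepsilon / 2^n;
                | then
                | 
                |     \lambda(\mathbb{Q})
                |       \le \sum_{n=1}^{\infty} \frac{\varepsilon}{2^{n}}
                |       = \varepsilon
                | 
                | for every \varepsilon > 0, so \lambda(\mathbb{Q}) = 0.
                | And by Borel's normal number theorem the non-normal
                | reals likewise form a measure-zero set, which is what
                | "100% of real numbers are normal" means.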
        
               | hn_throwaway_99 wrote:
               | But then the original implication, "100% of real numbers
               | are normal, so that's pretty strong statistical
               | evidence", still doesn't make any sense, as it's
               | essentially using "100%" to imply "strong statistical
               | evidence" that the rationals don't exist, which obviously
               | doesn't follow.
        
               | staunton wrote:
               | > still doesn't make any sense
               | 
               | Right. I'm pretty sure actually that it was a joke...
        
               | mhink wrote:
               | I got the impression that the comment was a bit tongue-
               | in-cheek.
               | 
               | The joke lies in the fact that saying "100% of real
               | numbers" isn't *technically* the same thing as saying
               | "all real numbers", because there's not really a good way
               | to define a meaning for "100%" that lets you exclude
               | rational numbers (or any other countable subset of the
               | reals) and get something other than 100%.
        
         | sxv wrote:
         | Isn't 202TB (for comparison) way too small to contain every
         | permutation of information? That filesize wouldn't even be able
         | to store a film enthusiast's collection?
        
         | _fizz_buzz_ wrote:
          | This is not necessarily true. Pi never repeats, but it could,
          | for example, stop containing the digit 3 at some point (or
          | something like that). It would still never repeat, yet it
          | would not contain all conceivable information.
        
           | pilaf wrote:
            | But the digit 3 is only there because we decided to
            | calculate the digits in base 10. We could encode Pi in
            | binary instead, and since it doesn't repeat, there can
            | never be a point after which there are no more 1s or no
            | more 0s, right?
        
             | bubblyworld wrote:
             | That's true - you can quite easily prove that an eventually
             | constant sequence of decimals codes for a rational number.
             | 
             | But it's also true that pi may not contain every _possible_
             | sequence of decimals, no matter what base you pick. Like
             | the Riemann hypothesis, it seems very likely and people
             | have checked a lot of statistics, but nobody has proven it
             | beyond a (mathematical) shadow of doubt.
        
             | _fizz_buzz_ wrote:
             | Obviously, it was just an example to illustrate what a non-
             | periodic number could look like that doesn't contain all
             | possible permutations. If the number never contains the
             | digit 3 in base 10 it will also not contain all possible
             | permutations in all other bases.
        
         | criddell wrote:
         | > every piece of conceivable information is in there
         | 
         | Wouldn't the encoded information have to have a finite length?
         | For example, pi doesn't contain e, does it?
        
           | tzs wrote:
           | > For example, pi doesn't contain e, does it?
           | 
           | Assuming we are only interested in base 10 and that pi
           | contains e means that at some point in the sequence of
           | decimal digits of pi (3, 1, 4, 1, 5, 9, 2, ...) there is the
           | sequence of decimal digits of e (2, 7, 1, 8, 2, 8, ...), then
           | I believe that question is currently unanswered.
           | 
           | Pi would contain e if and only if there are positive integers
           | n and m such that 10^n pi - m = e, or equivalently 10^n pi -
           | e = m.
           | 
           | We generally don't know if combinations of e and pi of the
           | form a pi + b e where a and b are algebraic are rational or
           | not.
           | 
           | Even the simple pi + e is beyond current mathematics. All
           | we've got there is that at least one of pi + e and pi e must
           | be irrational. We know that because both pi and e are zeros
           | of the polynomial (x-pi)(x-e) = x^2 - (pi+e)x + pi e. If both
           | pi+e and pi e were rational then that polynomial would have
           | rational coefficients, and the roots of a non-zero polynomial
           | with rational coefficients are algebraic (that is in fact the
           | definition of an algebraic number) and both pi and e are
           | known to not be algebraic.
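            | 
            | Writing that last step out as a worked equation (just
            | restating the argument above): if both \pi + e and \pi e
            | were rational, then
            | 
            |     (x - \pi)(x - e) = x^2 - (\pi + e)\,x + \pi e
            | 
            | would be a quadratic with rational coefficients having \pi
            | and e as roots, so both would be algebraic -- contradicting
            | the transcendence of \pi and e. Hence at least one of
            | \pi + e and \pi e is irrational.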
        
         | worewood wrote:
          | The sad thing is that the index would take just as much space
          | as the data itself, because on average you can expect to find
          | an n-bit string around position 2^n.
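          | 
          | A quick simulation of that claim (a sketch; the patterns and
          | trial count are arbitrary):
          | 
          |     import random, statistics
          | 
          |     def first_pos(pattern, rng):
          |         # Index where `pattern` first starts in a stream of
          |         # independent uniform random bits.
          |         window, pos = "", 0
          |         while True:
          |             bit = str(rng.randint(0, 1))
          |             window = (window + bit)[-len(pattern):]
          |             pos += 1
          |             if window == pattern:
          |                 return pos - len(pattern)
          | 
          |     rng = random.Random(0)
          |     for n in (4, 8, 12):
          |         pattern = format(rng.getrandbits(n), f"0{n}b")
          |         mean = statistics.mean(
          |             first_pos(pattern, rng) for _ in range(200))
          |         print(n, round(mean), 2 ** n)  # mean ~ 2^n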
        
         | Zambyte wrote:
         | https://news.ycombinator.com/item?id=36357466
        
         | 2OEH8eoCRo0 wrote:
         | Does pi contain pi?
        
           | schoen wrote:
           | It does, starting right at the beginning!
        
         | sundry_gecko wrote:
         | Reminds me of a scene of Finch teaching in Person of Interest.
         | 
         | https://m.youtube.com/watch?v=yGmYCfWyVAM
        
         | constantcrying wrote:
         | >As pi never repeats itself, that also means that every piece
         | of conceivable information (music, movies, texts) is in there,
         | encoded.
         | 
         | It is somewhat shocking that again and again this logical
         | fallacy comes up. Why do people think that this is true? It
         | doesn't even sound true.
        
           | hkhanna wrote:
           | Isn't it a property of infinity? If pi goes on infinitely
           | without repeating itself, every possible combination of
           | numbers appears somewhere in pi.
           | 
           | It's sort of like the idea that if the universe is infinitely
           | big and mass and energy are randomly distributed throughout
           | the universe, then an exact copy of you on an exact copy of
           | Earth is out there somewhere.
           | 
            | This property of infinity has always fascinated me, so I'm
            | very curious where the logical fallacy might be.
        
             | n2d4 wrote:
             | Not necessarily. The number 1.01001000100001000001... never
             | repeats itself, yet most other numbers can never be found
             | in it.
             | 
             | A number that contains all other numbers infinitely many
             | times (uniformly) would be called normal, but no one has
              | managed to prove this for pi yet. In fact, no one has even
              | managed to prove that pi doesn't contain only 0s and 1s
              | like the above after the X-th digit.
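              | 
              | A tiny sketch of that construction, just generating a
              | prefix of the fractional digits:
              | 
              |     def prefix(blocks):
              |         # .01 001 0001 ... : k zeros then a 1, k = 1, 2, ...
              |         return "".join("0" * k + "1"
              |                        for k in range(1, blocks + 1))
              | 
              |     s = prefix(60)
              |     print("11" in s, "2" in s)   # False, False
              | 
              | The digits never become periodic (the gaps keep growing),
              | yet "11", or anything containing a digit from 2 to 9,
              | never occurs.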
        
             | constantcrying wrote:
             | >Isn't it a property of infinity? If pi goes on infinitely
             | without repeating itself, every possible combination of
             | numbers appears somewhere in pi.
             | 
              | No. Example: 0.1011011101111011111... never repeats, yet
              | there is no 2 in it, nor is there a 00.
        
               | onion2k wrote:
               | The fact you can't encode arbitrary data in a structured-
               | but-irrational number doesn't mean you can't encode data
               | in a 'random' irrational number.
               | 
                | The question is really 'Does every finite sequence of
                | digits appear in pi?' I can't answer
               | that because I'm not a mathematician, but I also can't
               | dismiss it, because I'm not a mathematician. It sounds
               | like a fair question to me.
        
               | constantcrying wrote:
               | >I can't answer that because I'm not a mathematician
               | 
               | So what? Mathematicians can't answer it either. It is an
               | open question and because it is an open question claiming
               | it is or isn't true makes no sense.
               | 
               | >The fact you can't encode arbitrary data in a
               | structured-but-irrational number doesn't mean you can't
               | encode data in a 'random' irrational number.
               | 
               | You can not encode data in a random number. If it is
               | random you can not encode data in it, because it is
               | random. I am not sure what you are saying here.
               | 
                | I demonstrated that there exist numbers whose digits go
                | on forever and never repeat, yet which don't contain
                | every single possible substring of digits. So pi is
                | either such a number or it is not; the answer is not
                | known. It is definitely not a consequence of pi being
                | infinitely long and never repeating.
        
               | onion2k wrote:
               | _You can not encode data in a random number_
               | 
                | That's why I put random in quotes. Pi is not a random
                | number. You _can_ encode data in it, e.g. find a place
                | that matches your data and give people the offset.
                | That's not very helpful for most things though.
        
               | fragmede wrote:
                | Just index on the number of ones in a row. E.g. in
                | 0.10110 there are two ones in a row, so reference those
                | two ones to represent the number two. For 00, flip it
                | and refer to the same pair of ones.
        
               | constantcrying wrote:
               | That is totally missing the point. Of course for every
               | number there is an encoding that contains all pieces of
               | information.
               | 
               | That obviously applies to 0.00... = 0 as well, it
               | contains 0, then 00, then 000 and so on. So every number
               | and therefore every piece of information is contained in
               | 0 as well, given the right encoding. Obviously if you can
               | choose the encoding after choosing the number all number
               | "contain" all information. That is very uninteresting
               | though and totally misses the point.
        
             | dist-epoch wrote:
             | Most physicists don't believe that infinity can actually
             | exist in the universe.
             | 
              | Put another way, the program which searches for those
              | works of art in the digits of pi will never finish (for a
             | sufficiently complex work of art). And if it never
             | finishes, does it actually exist?
        
               | constantcrying wrote:
               | >Most physicists don't believe that infinity can actually
               | exist in the universe.
               | 
               | Citation needed.
               | 
               | Believing in real numbers requires you to believe in far
               | more than infinity. How many physicists reject real
               | numbers?
        
               | n_plus_1_acc wrote:
               | Yeah, last time I checked physicists use many integrals,
               | derivatives and nablas.
        
               | staunton wrote:
               | That's a completely different issue. Using math to solve
               | physics problems deals with physical _models_. Models are
               | imperfect and what kinds of math they use is completely
               | separate from asking  "does infinity exist in our actual
               | universe".
               | 
                | To answer that question, you would have to rule out,
                | with experimental evidence, all models people can come
                | up with that try to explain the universe without
                | "infinities".
               | It's neither completely clear what that would mean, nor
               | whether it's even in principle possible to determine
               | experimentally (it's also most likely completely
               | irrelevant to any practical purpose).
        
             | andrewla wrote:
             | More trivially, there are an infinite number of even
             | numbers, and they do not repeat, yet they do not contain a
             | single odd number.
        
           | mywittyname wrote:
           | The thinking is inspired by the Infinite Monkeys Theorem.
           | Which does have an easy-to-understand mathematical proof (and
           | the criticisms of said proof are more difficult to grasp).
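            | 
            | The easy version of that proof, sketched: chop the monkey's
            | output into consecutive blocks of the target text's length
            | L, over an alphabet of size a. The chance that none of the
            | first N blocks equals the target is
            | 
            |     \left(1 - a^{-L}\right)^{N} \longrightarrow 0
            |     \quad \text{as } N \to \infty ,
            | 
            | so the text appears somewhere with probability 1.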
        
         | its_ethan wrote:
         | https://libraryofbabel.info/
         | 
         | you might find this to be pretty cool. It's similar to what
         | you're describing. Whoever made it has an algorithm where you
         | can look up "real" strings of text and it'll show you where in
         | the library it exists. you can also just browse at random, but
         | that doesn't really show you anything interesting (as you would
         | expect given it's all random).
        
           | tetris11 wrote:
           | the hashing algorithm should encode some locality, but
           | disappointingly doesn't...
           | 
           | ...and can't because there is no original corpus that the
           | locality hashing algorithm can use as a basis
        
       | waldrews wrote:
       | Why go through all that effort, when it's just tau/2.
        
       | maxmouchet wrote:
        | Nice achievement, but it's always a bit disappointing that
        | these records are based on throwing more money at the problem
        | rather than on new theoretical ground or software improvements
        | (IIRC y-cruncher is not open source).
        
       | vasco wrote:
        | This reminds me that I've been curious to know how many digits
        | of Pi Hacker News people walk around with. I memorized 18
        | digits in high school that I still remember and sometimes use
        | as a bad joke here and there. I'm curious how many digits
        | people here walk around remembering, especially if you aren't
        | into doing it competitively (which I found out later is a
        | thing).
        
         | heinrich5991 wrote:
         | 18 including the leading 3.
        
         | contravariant wrote:
         | I remember up to around 3.14159265, which is roughly what most
         | cheap calculators will show you.
        
           | Am4TIfIsER0ppos wrote:
           | Likewise. I just memorized what my 10 digit calculator showed
           | me in high school.
        
         | rossant wrote:
         | I memorized 20 digits when I was 11 or 12. I still remember
         | them.
        
         | GeoAtreides wrote:
         | There is a streamer who memorised 1000 digits of pi, on stream,
         | from scratch, in 11 hours. She used the mind palace method.
         | 
         | Here's the whole vod, if curious:
         | https://www.youtube.com/watch?v=TZqTIXCrC3g
        
           | wruza wrote:
           | Surprised it's not Matt, although he'd probably misremember
           | one digit anyway.
        
         | goodcanadian wrote:
         | I was up to 40 or 50 at one point (around age 18), but I don't
         | think I can get past 20, now.
        
         | mkl wrote:
         | 100 + a few now (since age ~15). I briefly knew 400, but didn't
         | put in the practice to keep it (some sections of those later
         | 100s are still there, but I can't access them as I'm missing
         | some digits in between). It takes about 40min. to get 100 new
         | digits into my head (just straight repetition, paying attention
         | to the sounds and patterns in rhythmic groups). Keeping them
         | long-term requires a lot of time spent on spaced repetition. I
         | run through the first 100 digits just a few times a year now,
         | and it's all still easily there.
        
         | randunel wrote:
          | I memorised the square root of 2 in high school to impress
          | the teacher. The first 32 digits eventually became part of my
          | passwords, though significantly fewer of them nowadays
          | because I kept running into too many password length
          | restrictions.
        
         | generic92034 wrote:
          | So, how about memorizing the last 100 known digits of Pi?
         | Of course you would have to learn a new sequence from time to
         | time. :)
        
         | ryandvm wrote:
         | I just remember the integer part.
        
         | brianjlogan wrote:
         | I memorized and forgot 50 digits of Pi after doing some memory
         | training game. Was a cool bar trick but ultimately lost my
         | memory palace discipline. I might be able to pull out some
         | percentage of accuracy if I focused but it feels fairly
         | pointless.
        
         | BenjiWiebe wrote:
         | I memorized 61 or a bit more digits probably 15 years ago. 61
         | digits stuck and I have them yet today.
         | 
         | My younger brother was competing with me. He knows ~160 digits.
         | 
         | No special memory tricks - just repeatedly reading/reciting
         | until they stuck.
        
       | scoot wrote:
       | I'm curious what the longest string of digits of PI embedded in
       | that is (and what the most efficient algorithm for finding it
       | would be).
        
       | Am4TIfIsER0ppos wrote:
       | Did you write down all the digits? Excellent.
        
       | ptsneves wrote:
        | Are there any interesting formulas where pi is raised to a
        | power? That would affect how much the precision of pi's digits
        | matters, if I understand correctly.
        
       | boringg wrote:
        | Have they empirically checked that there is no repeating
        | pattern, or are they just going by the proof? Since this is the
        | furthest anyone has calculated pi to, I imagine you could also
        | empirically check the proof to confirm accuracy.
        
         | hughesjj wrote:
         | There's a series expansion for arbitrary digits of pi (but in
         | hex) that you could sample
         | 
         | https://en.m.wikipedia.org/wiki/Bailey%E2%80%93Borwein%E2%80...
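          | 
          | A minimal Python sketch of that digit-extraction idea (the
          | standard BBP formula, not whatever code the record run used):
          | 
          |     def bbp_term(j, n, tail=20):
          |         # Fractional part of sum_{k>=0} 16^(n-k) / (8k + j)
          |         s = 0.0
          |         for k in range(n + 1):
          |             s = (s + pow(16, n - k, 8 * k + j)
          |                  / (8 * k + j)) % 1.0
          |         for k in range(n + 1, n + tail):
          |             s += 16.0 ** (n - k) / (8 * k + j)
          |         return s % 1.0
          | 
          |     def pi_hex_digit(n):
          |         # Hex digit of pi at offset n after the point
          |         x = (4 * bbp_term(1, n) - 2 * bbp_term(4, n)
          |              - bbp_term(5, n) - bbp_term(6, n)) % 1.0
          |         return "0123456789abcdef"[int(16 * x)]
          | 
          |     # pi = 3.243f6a88... in hex
          |     print("".join(pi_hex_digit(i) for i in range(8)))
          | 
          | Spot-checking a handful of positions this way against the
          | full computation is how these record runs are typically
          | sanity-checked.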
        
       | m3kw9 wrote:
        | How do they verify that? Even if it's as simple as addition,
        | hardware anomalies may appear; just look at the division and
        | floating-point weirdness on certain hardware.
        
         | meroes wrote:
         | It ran on their machine
        
         | NeoTar wrote:
         | There are techniques which allow you to, for instance,
         | calculate the 'n'th digit of Pi in base 10 (or binary, or hex,
         | etc.) These are generally more computationally/memory expensive
         | than the techniques used to calculate as many digits of Pi as
         | possible.
         | 
         | So, you run your big calculation to get all XXX trillion digits
         | on one machine, and run a completely different calculation to
         | check, say, 1000 of those digits. If all match it's a pretty
         | convincing argument that the calculation is correct.
        
       | chj wrote:
        | Is it possible to find some encoded formula for pi inside pi's
        | own digits, given their length?
        
         | lqet wrote:
          | Do you mean in the computed 202T digits of Pi, or in the
          | infinite sequence of Pi digits? In the case of the latter:
          | sure,
         | probably, if Pi is normal, any finite sequence of digits is
         | contained _somewhere_ in Pi, so it would contain (in encoded
         | form) any closed formula and any program, book, or piece of
         | music ever written.
         | 
          | E: As the comments have pointed out, this requires the
          | conjecture that Pi is normal to be true, which has not been
          | proven or disproven yet.
        
           | jagthebeetle wrote:
           | I thought this wasn't actually mathematically established -
           | the related property would be whether or not pi is normal.
        
           | Q_is_4_Quantum wrote:
            | Is this known to be true? It's obviously not true for
            | arbitrary irrational numbers.
        
       | WalterBright wrote:
        | My roommate in college had, while in high school, gone for a
        | Guinness World Record for memorizing digits of pi. He memorized
        | them out to 800 or so, then discovered someone else had
        | memorized it to thousands, so he gave up.
       | 
        | In college, he figured out how to write a program to compute an
        | arbitrary number of the digits of Pi. I asked him how he knew
        | it was correct. He said "just look at it. The digits are right!"
       | 
       | We were limited in the use of the campus PDP-10 by a CPU time
       | allotment per semester. He was planning to blow his allotment
        | computing pi; he figured he could compute it to 15,000 digits or
       | so. At the end of the term, he fired it up to run overnight.
       | 
       | The PDP-10 crashed sometime in the early morning, and his
        | allotment was used up with no results! He just laughed and
       | gave up the quest.
       | 
       | Later on, Caltech lifted the limits on PDP-10 usage. Which was a
       | good thing, because Empire consumed a lot of CPU resources :-/
        
         | smokel wrote:
         | The limits on memorizing digits of pi have been lifted to great
         | heights by Akira Haraguchi [1].
         | 
         | [1] https://en.wikipedia.org/wiki/Akira_Haraguchi
        
           | patriksvensson wrote:
            | Interesting: _Despite Haraguchi's efforts and detailed
           | documentation, the Guinness World Records have not yet
           | accepted any of his records set._
        
             | NobodyNada wrote:
              | Guinness is not an "authentic" record-keeping organization,
             | in that they largely don't attempt to maintain accurate
             | records of "the most X" and "the fastest Y". Rather, their
             | business model is primarily based on marketing and
             | publicity stunts: a company makes a ridiculously large
             | pizza or whatever, and pays a very large amount of money to
             | have Guinness "verify" their record for biggest pizza. A
             | Guinness world record is just Guinness's record; it's
             | commonly different from the true world record.
             | 
             | https://en.wikipedia.org/wiki/Guinness_World_Records#Change
             | _...
        
               | kens wrote:
               | Guinness claims that the IBM System/360 (1964) was the
               | first computer to use integrated circuits. I've tried
               | unsuccessfully to convince them that they are wrong. The
               | Texas Instruments Semiconductor Network Computer (1961)
               | was the first, a prototype system, followed by multiple
               | aerospace computers earlier than the System/360.
               | Moreover, the System/360 used hybrid SLT modules which
               | weren't even integrated circuits, so it's not even a
               | contender. Maybe you could argue that the System/360 was
               | the first commercial, production computer to use high-
               | density modular electronics, but that's a lot of extra
               | adjectives.
               | 
               | https://www.guinnessworldrecords.com/world-records/first-
               | com...
        
         | tpurves wrote:
         | Empire! What a classic. Burned many cpu cycles of my Atari ST
         | computer on that.
        
         | alsetmusic wrote:
          | Funny timing. An hour ago I was musing about a middle-school
          | classmate who endeavored to calculate pi as far as she could
          | by hand, and thinking how dated the idea seems. This was in the
         | 90s, so it's not as though we didn't have computers. They just
         | hadn't reached mass-adoption in households.
        
         | alejohausner wrote:
         | I memorized pi to 100 places in high school, but it didn't get
         | me any dates. The girls were more impressed by the jocks.
         | 
         | I should have attended a more geeky high school.
        
           | mywittyname wrote:
           | Being well rounded is important.
        
             | throwup238 wrote:
             | Who needs a significant other when they've got a hundred
             | significant digits?
        
           | mrspuratic wrote:
           | It was not rational, in hindsight.
        
         | throwup238 wrote:
         | _> Later on, Caltech lifted the limits on PDP-10 usage. Which
         | was a good thing, because Empire consumed a lot of CPU
          | resources :-/_
         | 
         | Knowing Caltech, there's a 50:50 chance that PDP is still
         | running somewhere, torturing some poor postdoc in the
         | astrophysics department because no one wants to upgrade it or
         | port some old numerical code to a modern architecture.
        
       | gigatexal wrote:
        | Anyone else thinking that a few nodes of those servers, with
        | their drool-worthy 60TB SSDs, in an HA environment would be
        | really, really awesome to play with and run infra on, so you
        | could go back to not worrying about cloud spend and just focus
        | on cool and fun stuff?
        
       | fritzo wrote:
       | Any signs of Sagan's conjectured graffiti yet? E.g. pictures of
       | circles?
        
       | 725686 wrote:
       | But why? Serious question. I'm sure something interesting/useful
        | might come out of it, and even if it doesn't, just go for it; but
       | is there any mathematical truth that can be gleaned by
       | calculating pi to more and more digits?
        
         | xyst wrote:
          | Cryptography algorithms use prime numbers, for example.
          | There's probably something out there that uses the digits of
          | pi.
        
         | sweezyjeezy wrote:
          | Not particularly. The only thing I can think of is if we analysed
         | it and saw there was some bias in the digits, but no one
         | expects that (pi should be a 'normal number' [1]). I think they
         | did it as a flex of their hardware.
         | 
         | [1] https://en.wikipedia.org/wiki/Normal_number
        
           | robxorb wrote:
           | Isn't there a non-zero chance that given an infinite number
           | of digits, the probability of finding repeats of pi, each a
           | bit longer, increases until a perfect, endless repeat of pi
           | will eventually be found thus nullifying pi's own infinity?
        
             | Antipode wrote:
             | The chance of that loop repeating forever is 0.
        
               | robxorb wrote:
               | Infinity has entered the chat.
        
               | kevinventullo wrote:
               | In this case, the infinite sum
               | 0+0+0+0+...
               | 
               | is still zero.
        
             | djkorchi wrote:
             | No, because it would create a contradiction. If a "perfect,
             | endless repeat of pi" were eventually found (say, starting
             | at the nth digit), then you can construct a rational number
             | (a fraction with an integer numerator and denominator) that
             | precisely matches it. However, pi is provably irrational,
             | meaning no such pair of integers exists. That produces a
             | contradiction, so the initial assumption that a "perfect,
             | endless repeat of pi" exists cannot be true.
        
               | robxorb wrote:
               | Yes and that contradiction is already present in my
               | premise which is the point. Pi, if an infinite stream of
               | digits and with the prime characteristic it is
               | normal/random, will, at some point include itself, by
               | chance. Unless, not random...
               | 
               | This applies to every normal, "irrational" number, the
               | name with which I massively agree, because the only way
               | they can be not purely random suggests they are
               | compressible further and so they have to be purely
               | random, and thus... can't be.
               | 
               | It is a completely irrational concept, thinking
               | rationally.
        
               | linearrust wrote:
               | > Pi, if an infinite stream of digits and with the prime
               | characteristic it is normal/random, will, at some point
               | include itself, by chance.
               | 
               | What you are essentially saying is that pi =
               | 3.14....pi...........
               | 
                | If that were the case, wouldn't it mean that the digits
                | of pi are not countably infinite but instead form a
                | continuum? Then you wouldn't be able to put the digits
                | of pi in one-to-one correspondence with the natural
                | numbers. But obviously we can, so shouldn't our default
                | be to assume the premise was wrong?
               | 
               | > It is a completely irrational concept, thinking
               | rationally.
               | 
               | It is definitely interesting to think about.
        
         | golergka wrote:
         | As a general principle, when you do something very complex just
         | for fun, you usually learn a lot of useful stuff along the way.
        
         | hn_throwaway_99 wrote:
         | The work was done by a team at "Storage Review", and the
          | article talks a lot about how they were exercising the
         | capabilities of their processor, memory, and storage
         | architecture.
        
         | panarky wrote:
         | Isn't everyone as curious as I am about what the pi-
         | quadrillionth digit of pi will turn out to be?
         | 
         | The suspense is killing me.
        
           | onion2k wrote:
           | It's a 4.
        
       | xyst wrote:
       | 202 trillion digits of pi. Maybe someday I will be able to use
       | this exact calculation to do something useful. Just need 61TB of
       | memory or disk space to store the constant
        
       | philip1209 wrote:
        | It's easy to dismiss this as useless. But I feel like doing this
       | work must have yielded some interesting second-order tools or
       | realizations.
       | 
       | So, does anybody know what interesting discoveries have come out
       | of this process, besides a more precise Pi?
        
       | 2o35j2o3j wrote:
       | They need to keep going. I heard there's a surprise about 10^20
       | digits in.
        
       | SloopJon wrote:
       | I was shopping for a storage array last year, and was impressed
       | by the IBM FlashSystem, which can stuff about 1.8PB (raw) into a
       | 4U enclosure using forty-eight 38.4TB FlashCore modules.
       | 
       | StorageReview's server is a different beast, but it's kind of
       | amazing that it gets similar capacity in only 2U.
        
       | bitslayer wrote:
       | Meh. About as useless as blockchain, I guess.
        
       ___________________________________________________________________
       (page generated 2024-07-15 23:00 UTC)