[HN Gopher] The Lost Art of Logarithms
       ___________________________________________________________________
        
       The Lost Art of Logarithms
        
       Author : ozanonay
       Score  : 178 points
       Date   : 2025-03-13 19:05 UTC (3 hours ago)
        
 (HTM) web link (www.lostartoflogarithms.com)
 (TXT) w3m dump (www.lostartoflogarithms.com)
        
       | tombert wrote:
        | I started using LMAX Disruptor for some projects. One quirk with
        | Disruptor is that the queue size always has to be a power of
        | two.
       | 
        | I wanted to make sure that I always have at least enough room
        | for any size and I didn't want to compute it manually, so I
        | wrote this:
        |         var actualSize = Double.valueOf(Math.pow(2,
        |             Math.ceil(Math.log(approxSize) / Math.log(2))))
        |             .intValue();
        | 
        | A bit much for a single line, but it's just using some basic log
        | rules to get the correct exponent. I learned all this in high
       | school, but some of my coworkers thought I was using this
       | amazing, arcane bit of math that had never been seen before. I
       | guess they never use log outside of Big-O notation.
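        | 
        | Spelled out step by step, that one-liner is (a sketch of the
        | same computation, assuming approxSize > 0):
        |         // e.g. approxSize = 100
        |         double log2 = Math.log(approxSize) / Math.log(2); // ~6.64
        |         double exp  = Math.ceil(log2);                    // round up: 7.0
        |         int actualSize = (int) Math.pow(2, exp);          // 2^7 = 128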
        
         | mananaysiempre wrote:
          | This is perfectly usable, of course, but I'd write
          |         var actualSize = Integer.highestOneBit(approxSize - 1) << 1;
         | 
         | purely to avoid involving the horrors that live beneath the
         | humble pow() and log().
         | 
         | (Integer.highestOneBit, also known as "isolate leftmost bit",
         | "most significant one", or the like, essentially has to be a
         | primitive to be efficient, unlike its counterpart for the
         | lowest bit, x&-x. The actual CPU instruction is usually closer
         | to Integer.numberOfLeadingZeros, but that's just a bitshift
         | away.)
        
           | tombert wrote:
            | That's pretty cool; I didn't even consider doing any bitwise
            | arithmetic.
           | 
           | I didn't particularly care about performance or anything for
            | this particular case, since it runs exactly once at the start
            | of the app just to initialize the Disruptor.
        
           | markrages wrote:
           | shouldn't it be
           | 
           | var actualSize = 1 << Integer.highestOneBit(approxSize - 1);
           | 
           | ?
        
             | mananaysiempre wrote:
             | Nope. I don't know why the Java folks decided not to use
             | the fairly standard verb "isolate" for this method, but
             | that's what it is[1]:
             | 
             | > public static int highestOneBit(int i)
             | 
             | > Returns an int value with at most a single one-bit, in
             | the position of the highest-order ("leftmost") one-bit in
             | the specified int value. Returns zero if the specified
             | value has no one-bits in its two's complement binary
             | representation, that is, if it is equal to zero.
             | 
             | There isn't a straight floor(log2(*)) as far as I can tell,
              | only Integer.numberOfLeadingZeros, and turning the latter
              | into the former is annoying enough[2] that I wouldn't
             | prefer it here.
             | 
              | [1] https://docs.oracle.com/javase/8/docs/api/java/lang/Integer....
             | 
              | [2] https://docs.oracle.com/javase/8/docs/api/java/lang/Integer....
        
               | markrages wrote:
               | Thanks. What a weird API.
        
           | layer8 wrote:
            | The above gives an incorrect result for _approxSize_ = 1
            | (namely 0). The following works (for values up to 2^30, of
            | course):
            |         var actualSize = Integer.MIN_VALUE
            |             >>> Integer.numberOfLeadingZeros(approxSize - 1) - 1;
            | 
            | Or, if you want 0 to map to 0 instead of to 1:
            |         var actualSize = Integer.signum(approxSize)
            |             * Integer.MIN_VALUE
            |             >>> Integer.numberOfLeadingZeros(approxSize - 1) - 1;
            | 
            | Of course, you could also use a variation of:
            |         var actualSize = Math.max(1,
            |             Integer.highestOneBit(approxSize - 1) << 1);
        
         | ajsnigrutin wrote:
         | Just shift the size to the right 1 bit and count the shifts
          | until the value turns to zero, and you'll get your base-2
          | exponent for the size :)
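          | 
          | A minimal sketch of that loop (my illustration; shifting
          | approxSize - 1 so that exact powers of two map to themselves):
          |         int exp = 0;
          |         for (int v = approxSize - 1; v != 0; v >>= 1) {
          |             exp++;             // count shifts until zero
          |         }
          |         int actualSize = 1 << exp; // smallest power of two >= approxSize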
        
       | kenjackson wrote:
       | Notation for writing log has always bugged me. Like I feel like
       | it should be more like <10>^<527> which would be the log base 10
       | of 527. That's not it, but something. The current notation just
       | doesn't feel quite right.
        
         | Jtsummers wrote:
         | https://mathcenter.oxford.emory.edu/site/math108/logs/
         | 
         | Some people have suggested the "triangle of power".
        
           | awesome_dude wrote:
           | The Triangle of power explanation of logarithms is what
           | really got me across logs.
           | 
            | It wasn't until seeing the triangle and having the
            | relationships explained that I had any clue about logarithms;
            | up until then logs had been some archaic number that meant
            | nothing to me.
           | 
           | Because of the triangle of power, I now rock up to B and B+
           | Trees and calculate the number of disc accesses each will
           | require in the worst case, depending on the number of values
           | in each block (eg, log2(n), log50(n) and log100(n))
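            | 
            | Roughly, that calculation is (my sketch; worst-case block
            | reads grow like log_b(n) for branching factor b):
            |         long n = 1_000_000;                            // number of keys
            |         double reads2   = Math.log(n) / Math.log(2);   // ~19.9 at b = 2
            |         double reads100 = Math.log(n) / Math.log(100); // ~3 at b = 100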
        
             | cafeinux wrote:
             | Ironically, that notation, which I just discovered,
             | confuses me more than anything else. Logs clicked for me
             | when someone online said "amongst all the definitions we
              | have for logs, the most useful and least taught is that
              | log() is just a power". At that exact instant, it was as if
              | years of arcane and foreign language just disappeared in
             | front of my eyes to leave only obviousness and poetry.
        
               | awesome_dude wrote:
               | That is not without humour :)
               | 
               | I don't understand the comment about it just being a
               | power, but, for me, knowing that it's filling in the
                | third vertex of the triangle, with the exponent at the
                | top and n on the other corner, is what makes it work for
                | me - I now know in my head that when I am looking for
                | the log of n, I am looking for the exponent that would
                | turn the base into n.
               | 
                | I don't go looking for the exact log; I only look for
                | whole numbers when I am calculating the value in my mind.
               | 
                | But it makes sense, when I am looking for the log2 of 8,
                | to know that the answer is "what exponent will make 2
                | into 8?", and that's "3".
        
           | the__alchemist wrote:
           | Yea; this owns. I used it for my own stuff.
        
           | nh23423fefe wrote:
            | I never understand why anyone thinks it's good notation. The
           | layout means nothing and lets you infer nothing because
           | exponentiation isn't a 2d planar geometric operation. All the
           | information and rules are contained in the idea "exponentials
           | map the additive reals to the multiplicative reals".
           | 
           | The notation conveys no information at all, and provides no
           | means of proving anything, and even the notation for function
           | composition is worse.
           | 
            | Given the operator pow : R^2 -> R, there are two possible
            | inverses: root and log.
            | 
            | Root isn't even interesting: it's just pow with the 2nd
            | argument precomposed with reciprocal: root x b = pow x (1/b).
        
           | majkinetor wrote:
           | https://www.youtube.com/watch?v=sULa9Lc4pck
        
           | empath75 wrote:
            | I think it's good for explanation purposes, but not actually
            | great as notation for operations that are so common: too much
            | room for ambiguity, especially when writing quickly.
        
           | kenjackson wrote:
           | I've never seen this before, and I LOVE it! I'm a new
           | advocate!
        
         | thaumasiotes wrote:
         | Well, currently exponentiation has a superscript exponent to
         | the right of the base, and logs have a subscript base to the
         | left of the exponent. They're already very similar to your
         | example, but they also include the word "log".
        
       | Animats wrote:
       | Is this the same author who wrote Win32 API books?
        
         | Jtsummers wrote:
         | Yes. https://www.lostartoflogarithms.com/author/
        
         | ohgr wrote:
          | Yes, "the office door stop" as it was known at our place. Top
          | book though; it just faded in utility, and no one had the heart
          | to dispose of it because of the good memories.
        
           | Dwedit wrote:
           | Good book indeed, just that I wouldn't use a book to look up
           | Win32 API functions.
        
             | ohgr wrote:
             | It used to be the only way!
        
             | werdnapk wrote:
             | Oh, the hours I'd spend browsing through tech books back in
             | the day... good times.
        
               | hughw wrote:
               | Yeah, a good tech bookstore was a golden way to spend an
               | hour.
        
       | hyperopt wrote:
       | Charles Petzold wrote one of my favorite books - "Code: The
       | Hidden Language of Computer Hardware and Software". Very excited
       | to see how this turns out and thanks for giving some of this
       | knowledge away for free!
        
         | tocs3 wrote:
         | I would also recommend the "NAND to Tetris" book. Covers much
          | the same ground (as I remember things, anyway) but takes a
          | hands-on approach. I enjoyed Code too, and it's worth a look
          | for those interested.
        
       | tmoertel wrote:
        | Here's a logarithmic fact that I've made use of frequently:
        | 
        | If _X_ is a random variable having a uniform distribution between
        | zero and one, then -ln(_X_)/lambda has an exponential
        | distribution with rate lambda.
        | 
        | This relationship comes in handy when, for example, you want to
        | draw weighted random samples, or generate event times for
        | simulations.
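        | 
        | A minimal Java sketch of that transform (my illustration,
        | assuming a rate lambda > 0):
        |         import java.util.Random;
        | 
        |         // Exponential(lambda) sample from a Uniform[0,1) draw.
        |         static double expSample(Random rng, double lambda) {
        |             double u = rng.nextDouble();        // uniform on [0, 1)
        |             return -Math.log(1.0 - u) / lambda; // 1 - u avoids log(0)
        |         }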
        
         | zwnow wrote:
         | How long do I have to study math to understand this?
        
           | whereismyacc wrote:
            | To understand what they said, or to understand a proof of why
            | it would be true?
            | 
            | Any stats class would be enough to understand what they said.
        
             | pc86 wrote:
             | I know what all these words mean, it "makes sense" to me in
             | the sense that I read it and I think "ok.." but I wouldn't
             | have the slightest idea how to use this to get weighted
             | random samples or "generate event times."
             | 
             | So I guess I "understand it" in the sense that it doesn't
             | sound like a foreign language, but I can't apply it in any
             | meaningful way.
        
           | pvg wrote:
            | Depends on where you're starting from, but from high-schoolish
            | maths you can probably sort this out in a few hours or days.
        
           | jrussino wrote:
           | I love that you asked this, but I think it's not _quite_ the
           | right question.
           | 
           | I've been wishing for years that someone would maintain a
           | "dependency graph" for mathematical concepts. I think Khan
           | Academy tried to do something like this at one point but took
           | it down a long time ago. As long as we're far enough from the
           | bleeding-edge of research topics, I feel like maths is the
           | one field where this might be possible to do really well. You
           | should be able to point at something you don't understand and
           | trace backwards in a very granular fashion until you hit a
           | concept you already know/understand, and then learn what you
           | need to fill in the gap!
        
             | pvg wrote:
             | That's more-or-less the purpose of
              | https://mathworld.wolfram.com/ or at least, it's
              | significantly better at it than, say, Wikipedia.
        
             | elevation wrote:
             | I had wanted to build a website like this as a hobby,
             | except not limited strictly to mathematical concepts. I
             | wanted to build a graph of the skills/concepts you'd need
             | to understand electrical impedance, fix an air conditioning
             | system, bake bread reliably.
             | 
             | One source of information could be course syllabi, which
             | describe a progression of topics and their prerequisites.
             | 
             | Rather than be the authority on what concepts must precede
             | others, I envisioned making a system that could represent
             | different educational approaches: not every educator agrees
             | that Calculus I ought to be a prerequisite for studying
             | Physics I. Not all bread recipes use yeast.
             | 
             | I had a hard time finding a good domain for this effort.
             | "Tree of knowledge" dot TLD was taken.
        
           | bgnn wrote:
           | Depends on how much you practiced high school math. It's not
           | hard but we forget it without practice.
        
           | carabiner wrote:
           | Ask chatgpt to explain it to you like you're 18. It made it
           | really easy to understand.
        
           | bobbylarrybobby wrote:
           | Unless I'm missing something, this can just be directly
           | verified, no "understanding" necessary. All you need to know
            | is that a probability distribution can be characterized by
            | its cumulative distribution function (CDF).
            | 
            | If Y = -ln(X)/lambda, then P(Y<a) = P(-ln(X)/lambda<a) =
            | P(X>exp(-lambda a)) = 1-exp(-lambda a).
            | 
            | And if Z is exponential with rate parameter lambda, i.e. with
            | PDF lambda exp(-lambda t), then P(Z<a) = integral from 0 to a
            | of lambda exp(-lambda t) dt, which can be directly computed
            | to be 1-exp(-lambda a).
            | 
            | They have the same CDF, so they're the same distribution.
        
             | timeinput wrote:
             | I mean if starting from scratch that seems like many years
             | in most western education systems to get to probability,
             | logarithms, exponentiation.
             | 
              | I would say if you knew 2+2=4, and not much else, you're
              | years away from 'understanding'; if you know ln(exp(y)) =
              | y, and P(x>0.5) = 0.5 for a uniform distribution on [0, 1),
              | then you don't need any additional understanding.
              | 
              | I would bet the GP comment is somewhere in between the two
             | extremes, but I think a random sampling of the population
             | would likely result in people generally not knowing the log
             | / exponentiation relation, or anything about the uniform
             | distribution.
        
               | zwnow wrote:
                | Yeah, I got many answers and I don't understand a single
                | one. Good thing you barely need math in programming.
        
           | tmoertel wrote:
           | Most textbooks on probability have some discussion of the
           | relationships between the various distributions commonly in
           | use. If you just want a quick overview, I found John D.
           | Cook's diagram to be handy:
           | 
           | https://www.johndcook.com/blog/distribution_chart/
        
           | dynm wrote:
           | Possibly unhelpful answer: Arguably none! This is presented
           | as a surprising fact, but you could easily argue that this is
           | the proper _definition_ of an exponential distribution. If
           | you do
           | 
           | x = -log(rand())/lambda
           | 
            | a bunch of times, that comes from _something_, right? Well,
           | let's call that something an exponential distribution.
           | 
           | From this perspective, the thing that actually needs math is
           | finding the density function of the exponential distribution.
           | (For that, in theory you just need calc 101 and probability
           | 101.)
        
           | coliveira wrote:
           | If you study calculus and introduction to probability theory,
           | then you're ready to learn this. So the answer is about 2
           | years after high school.
        
           | dfawcus wrote:
           | Not long. We were taught logs, and use of log tables, at
            | Middle School. So probably around age 11.
           | 
           | I also vaguely recall a couple of lessons where we went over
           | Napier's Bones, and they had us produce equivalents on pieces
           | of paper to cut out and move around.
           | 
           | I believe I still have my school day log tables around
           | somewhere. I'd just have to practice for 1/2 hr to remind
           | myself how to use them. That said, they did have usage
           | instructions in the last few pages.
        
             | zwnow wrote:
              | Look, I'm 30; most people I know have forgotten all of
              | school math long ago, me included. The entry barrier is too
              | big now.
        
               | dfawcus wrote:
                | Having scanned through his book, I think it makes this
                | seem overly complex.
               | 
                | At middle school, this was taught after having only done
                | simple arithmetic and learning about fractions (rational
               | numbers) in primary school, then decimal fractions in
               | middle school.
               | 
               | The use of logs from the tables was simply a set of basic
               | rules for how to apply them to a few scenarios. I can't
               | recall if it covered trig with those tables, but I doubt
               | it.
               | 
               | I learnt as a child between 10 (when I started middle
               | school), and say around 12 at most. I've forgotten the
               | use, but vaguely recall some way of representing negative
               | numbers in the table (n-bar, with a bar above the digit).
               | 
               | I'm way over 30. I never used log tables after high
               | school, and have forgotten the rules for usage, but
               | recall it didn't take long to learn the first time. *
               | 
               | However for simple uses (multiplication and division) I'd
                | expect I'd be able to pick it up again in at most a week's
                | worth of practice. It would be made a lot easier now by
               | being able to compare and check calculations with a
               | computer or pocket calculator.
               | 
               | I'd expect any adult able to program a computer to also
               | be able to pick it up in a similar period, or at most a
               | month.
               | 
               | Remember we used to teach this to kids, and expect them
               | to be able to pick it up (if not be accurate in
                | application) in under a week's worth of lessons.
               | 
               | * Note I didn't even know how to do long multiplication
                | when I learnt, as due to political interference with the
                | teaching curriculum, I'd not been taught it at primary
                | school.
        
           | wanderingmind wrote:
            | All you need is to plot -log(x) for x between 0 and 1 and you
            | will see that -log(x) transforms a uniform line into an
            | exponential decay (towards 1). It's being said in a fancy way.
            | This is also the origin of the log-likelihood loss function
            | in ML.
        
         | pash wrote:
         | The general version of this is called _inverse transform
         | sampling_ [0], which uses the fact that for the cdf _F_ of any
         | random variable _X_ the random variable _Y = F(X)_ has a
         | standard uniform distribution [1]. Since every cdf increases
          | monotonically onto the unit interval, every cdf is invertible
          | [2]. So apply the inverse cdf to both sides of the previous
          | equation and you get that _F^-1(Y) = X_ is distributed like _X_.
         | 
         | Sampling from a standard uniform distribution and then using
         | the inverse transform is the commonest way of generating random
         | numbers from an arbitrary distribution.
         | 
         | 0. https://en.m.wikipedia.org/wiki/Inverse_transform_sampling
         | 
          | 1. https://en.m.wikipedia.org/wiki/Probability_integral_transfo...
         | 
         | 2. Not every cdf is one-to-one, however, so you may need a
         | generalized inverse.
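          | 
          | A small illustration (mine, not from the link; assuming rng is
          | a java.util.Random instance): for the cdf F(x) = x^2 on
          | [0, 1], the inverse is F^-1(u) = sqrt(u), so
          |         double u = rng.nextDouble(); // standard uniform draw
          |         double x = Math.sqrt(u);     // distributed with pdf 2x on [0, 1]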
        
           | tmoertel wrote:
           | For the particular case of the exponential distribution we
           | can go further. By taking advantage of the theory of Poisson
           | processes, we can take samples using a parallel algorithm. It
           | even has a surprisingly succinct SQL translation:
            |         SELECT *
            |         FROM Population
            |         WHERE weight > 0
            |         ORDER BY -LN(1.0 - RANDOM()) / weight
            |         LIMIT 100  -- Sample size.
           | 
           | Notice our exponentially distributed random variable on
           | prominent display in the ORDER BY clause.
           | 
           | If you're curious, I explore this algorithm and the theory
           | behind it in
            | https://blog.moertel.com/posts/2024-08-23-sampling-with-sql....
        
       | inasio wrote:
        | There used to be practical value in being able to do some basic
        | back-of-the-envelope log calculations in your head (no
        | calculators; this was how you did fast multiplications/divisions
        | or exponents). There's a story in Feynman's _Surely You're
        | Joking_ book about Los Alamos scientists holding speed
        | competitions for mental log calculations.
        
       | NoMoreNicksLeft wrote:
       | If the author is in here, thank you. Been looking for a text for
       | my daughter on the subject. This might just fit the bill. If
       | you're just the linker, then thank you Ozanonay.
        
       | inasio wrote:
       | (I'm sure this is in the book) John Napier, the father of
       | logarithms (the N in ln), basically had a sweatshop of human
       | calculators making log tables over something like 20 years -
        | critical for celestial navigation. There was a huge prize
        | offered to the person who developed a method to safely navigate
        | across the oceans, which also led to the invention of the pocket
        | watch.
        
         | dekhn wrote:
          | Isn't the n in ln "natural" ("logarithmus naturalis")?
        
       | dekhn wrote:
        | I learned multiplication using addition and a lookup table in
       | a class taught by Huffman (of Huffman compression fame). You
       | weren't allowed to use a calculator on the test.
       | 
       | But my absolute favorite trick is base conversions,
       | https://www.khanacademy.org/math/algebra2/x2ec2f6f830c9fb89:...
        | with some practice you can do approximate base conversions (power
        | of 2 to power of 10 or e) in your head.
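        | 
        | For instance (my sketch, for some positive double x): since
        | 2^10 ~ 10^3, log10(x) ~ 0.3 * log2(x), which the change-of-base
        | formula gives exactly:
        |         // change of base: log_b(x) = ln(x) / ln(b)
        |         double log2x  = Math.log(x) / Math.log(2);
        |         double log10x = log2x * Math.log10(2); // equals Math.log10(x)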
        
       | dkislyuk wrote:
       | I found that looking at the original motivation of logarithms has
       | been more elucidating than the way the topic is presented in
       | grade-school. Thinking through the functional form that can solve
       | the multiplication problem that Napier was facing (how to
       | simplify multiplying large astronomical observations), f(ab) =
       | f(a) + f(b), and why that leads to a unique family of functions,
        | resonates a lot better with me as an explanation of why
        | logarithms show up everywhere. This is in contrast to teaching
        | them as the inverse
       | of the exponential function, which was not how the concept was
       | discussed until Euler. In fact, I think learning about
       | mathematics in this way is more fun -- what original problem was
       | the author trying to solve, and what tools were available to them
       | at the time?
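        | 
        | A sketch of why that functional equation pins down the logarithm
        | family (my derivation, assuming f is differentiable):
        |         b\,f'(ab) = f'(a)    % differentiate f(ab) = f(a) + f(b) in a
        |         f'(b) = f'(1)/b      % set a = 1
        |         f(b) = f'(1)\,\ln b  % integrate, using f(1) = 0
        | 
        | So every solution is a constant multiple of the natural
        | logarithm, and the constant just picks the base.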
        
         | cauliflower2718 wrote:
         | This follows directly from the fact that exp(x+y)=exp(x)exp(y).
        
           | dkislyuk wrote:
           | Yes, but such a property was not available to Napier, and
           | from a teaching perspective, it requires understanding
           | exponentials and their characterizations first. Starting from
           | the original problem of how to simplify large multiplications
           | seems like a more grounded way to introduce the concept.
        
         | saulpw wrote:
         | I think this should be front and center. To that end I propose
         | "magnitude notation"[0] (and I don't think we should use the
         | word logarithm, which sounds like advanced math and turns
         | people away from the basic concept, which does make math easier
         | and more fun).
         | 
         | https://saul.pw/mag
        
         | agumonkey wrote:
         | I often wonder about this. I also believe that mathematical
          | pedagogy strives to attract people who are very smart and think
          | in the abstract like Euler, and not operationally, meaning they
          | will get it intuitively.
         | 
         | For other people, you need to swim in the original problem for
         | a while to see the light.
        
         | malshe wrote:
         | We used logarithms routinely for large multiplications,
         | divisions, etc. in 11th and 12th grade. No calculators were
         | allowed. This was in India.
        
         | meta_ai_x wrote:
          | I actually prefer the straightforward "log is the inverse of
          | exponentiation". It's more intuitive that way because I can
          | automatically understand 10^2 * 10^3 = 10^5. Hence if you are
          | using log
         | tables, addition makes sense. I didn't need an essay to explain
         | that.
         | 
         | Take logs, add 2 + 3 = 5 and then raise it back to get 10^5.
        
         | analog31 wrote:
         | This would be an interesting thing to study: How many different
         | ways people learned about logarithms, and how they generally
         | fared in math. I learned about logarithms by seeing my dad use
         | his slide rule, and studying stock charts, which tended to be
         | semi-logarithmic.
        
       | adornKey wrote:
        | The logarithmic derivative is also something that is surprisingly
       | fundamental.
       | 
       | (ln(f))' = f'/f
       | 
        | In function theory you use it all the time. But people rarely
        | notice that it's related to a logarithm.
        | 
        | Also, the functions that have a nice logarithmic derivative are a
        | lot more interesting than expected. Nature is full of Gompertz
       | functions. Once you're familiar with it, you see it everywhere.
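        | 
        | For instance (my sketch), the Gompertz function has a
        | particularly clean logarithmic derivative:
        |         f(t) = a\,e^{-b\,e^{-ct}}
        |         \ln f(t) = \ln a - b\,e^{-ct}
        |         (\ln f)'(t) = f'(t)/f(t) = b\,c\,e^{-ct}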
        
       | sva_ wrote:
       | > Charles Petzold
       | 
       | Haven't heard that name in a while. For me he's the WinApi guy -
       | learned a lot from him when I first started programming.
        
       | ashwinsundar wrote:
        | > As with all my websites, The Lost Art of Logarithms has no
        | > ads, no popups, no cookies, no paywall, no registration, and no
        | > notifications. The website doesn't track its visitors or even
        | > accumulate statistics. I don't like to encounter ads and other
        | > annoyances when I visit a new website, and anecdotal evidence
        | > suggests that many others prefer a web experience free of ads
        | > and popups. But my motivation could also be interpreted as
        | > selfish rather than altruistic: By not implementing ads,
        | > popups, cookies, or a paywall, I've made constructing the
        | > website much easier for me. I can concentrate on the content
        | > itself rather than erecting barriers to the content.
       | 
       | Love this approach
        
       | hughw wrote:
       | I feel frustrated that we cannot conceive of numbers like 10^80
        | (atoms in the universe) or 10^4000 (the number of configurations
        | for a system with 4000 variables having 10 states each). Maybe
        | there are superbrains out there in the universe that can do so.
        
         | crazygringo wrote:
         | I guess you have to define what you mean by "conceive".
         | 
          | I'm not sure you can even conceive of a number like 1,000, if
         | you're talking about holding an intuitive visual understanding
         | in your mind at once.
         | 
         | Like, I can easily see 100 in my mind's eye as a 10x10 grid of
         | circles. Even if I don't see each one clearly, I have a good
         | sense of the 10 on each edge and the way it fills in. But ask
         | me to imagine 10 of those side-by-side to make 1,000, and I
         | don't think I can. Once I imagine the 10 groups, each one is
         | just a square simplification, rather than any individual pieces
         | within.
         | 
         | But I'm totally familiar with 1,000 as a concept I can multiply
         | and divide with, and I can do math with 10^80 as well. And I
         | can do so fairly "intuitively" as well -- it's just all the
         | numbers up to 80 digits long. Even 4,000 digits fits on a
         | single page of a book.
        
           | hughw wrote:
           | My first cut at conceiving is to answer "how long would it
           | take a really fast computer to count to that number". The
           | answer for 10^4000 is still something like 10^3978 years. So,
           | a still inconceivable time.
           | 
            | (100 tera-ops computer) [edited to correct calculation]
        
             | crazygringo wrote:
             | But the length of time it takes a modern computer to count
             | to 10 or 1,000 is perhaps inconceivably _small_ by your
             | metric, no? Your idea arbitrarily selects numbers around 2
             | billion as being conceivable, at least for a single core on
             | my MacBook.
             | 
             | But my question isn't what makes 10^4000 inconceivable --
             | my question is what makes 10^4000 any _less_ conceivable
              | than 1000. To me, they're both firmly in the realm of
             | abstractions we can reason about using the same types of
             | mathematical methods. They're both qualitatively different
             | from numbers like 5 or 10 which are undoubtedly
             | "conceivable".
        
               | hughw wrote:
               | _the length of time it takes a modern computer to count
               | to 10 or 1,000 is perhaps inconceivably small by your
               | metric_
               | 
               | But I'd just call it "instantaneous" and something I
               | experience frequently. Whereas 10^3978 years is beyond
               | experience and even imagination.
        
               | crazygringo wrote:
                | > _But I'd just call it "instantaneous" and something I
               | experience frequently._
               | 
                | Then since you're OK with all smaller numbers being
                | equally "instantaneous", 1/10^4000 seconds is
               | instantaneous too. Add up enough of those to make a
               | second, and you can conceive of your previously
               | inconceivable number! :)
               | 
               | Of course, that will seem silly. I'm just illustrating I
               | don't think there are any grounds for claiming
                | exponentially _large_ numbers are _in_conceivable, but
                | exponentially _small_ numbers are somehow _conceivable_.
                | They're just as far from 1, multiplicatively, no matter
                | which direction you go in.
        
         | MisterTea wrote:
         | To me it's hard because we don't come across these magnitudes
         | on a daily basis. Grains of sand on a beach is an off the top
          | of my head way to visualize such quantities. But yeah, numbers
          | that big are so big that we have nothing to compare them with.
        
         | kqr wrote:
         | I feel like I have a much better grasp of those numbers since I
         | started using logarithms for mental maths:
         | https://entropicthoughts.com/learning-some-logarithms
        
       | kqr wrote:
       | I can strongly recommend memorising some logarithms for use in
       | mental maths. It's given me powers I did not expect to have!
       | Here's what I wrote about it when I started:
       | https://entropicthoughts.com/learning-some-logarithms
        
       | aquafox wrote:
        | Interesting insight into why applying a log transform often makes
        | data normally distributed: Pretty much all laws of nature are
        | multiplications (F = m*a, P*V = n*R*T, etc.). If you start with
        | i.i.d. random variables and multiply them, you get log-normal data
       | by virtue of the central limit theorem (because multiplications
       | are additions on a log scale; and the CLT is also somewhat robust
       | to non iid-ness). Thinking of data as the result of a lot of
       | multiplications of influential factors, we thus get a log-normal
       | distribution.
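        | 
        | A quick simulation sketch of that argument (mine; multiply many
        | i.i.d. positive factors, then look at the log):
        |         java.util.Random rng = new java.util.Random();
        |         double product = 1.0;
        |         for (int i = 0; i < 1000; i++) {
        |             product *= 0.5 + rng.nextDouble(); // i.i.d. factor in [0.5, 1.5)
        |         }
        |         // log(product) is a sum of i.i.d. terms, so the CLT makes it
        |         // approximately normal; product itself is then roughly log-normal.
        |         double logProduct = Math.log(product);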
        
         | TrainedMonkey wrote:
         | All data is linear when plotted on a loglog scale with a thick
         | marker.
        
           | aquafox wrote:
           | But in my explanation, there is no x axis.
        
         | H8crilA wrote:
         | The CLT does not require iid (independent, identically
         | distributed) variables. Just independent and having a variance,
          | plus some rather weak condition on slightly higher moments
          | (e.g. the Lyapunov condition).
         | Otherwise the variables can be quite different from each other.
        
       | BinRoo wrote:
       | One of my favorite tricks in elementary school was to convince
       | people I can calculate any logarithm for any number of their
       | choosing.
       | 
       | > Me: Pick any number.
       | 
       | > Friend: Ok, 149,135,151
       | 
       | > Me: The log is 8.2
       | 
        | Of course I'm simply counting the number of digits (minus one),
        | using 10 as the base, and guessing the fractional part, but it
        | certainly impressed everyone.
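        | 
        | The trick in code form (my sketch): the digit count gives the
        | integer part of log10, and the leading digits pin down the rest:
        |         long n = 149_135_151L;
        |         int characteristic = String.valueOf(n).length() - 1; // 8
        |         double exact = Math.log10(n);                        // ~8.17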
        
       ___________________________________________________________________
       (page generated 2025-03-13 23:00 UTC)