[HN Gopher] Ergodicity, What's It Mean
       ___________________________________________________________________
        
       Ergodicity, What's It Mean
        
       Author : simonebrunozzi
       Score  : 77 points
       Date   : 2021-04-03 16:45 UTC (6 hours ago)
        
 (HTM) web link (avoidboringpeople.substack.com)
 (TXT) w3m dump (avoidboringpeople.substack.com)
        
       | contravariant wrote:
       | Unless I'm very mistaken the example they give is not a dynamic
       | system, so asking whether it's ergodic doesn't make sense. For it
       | to be a dynamic system it would need to have an invariant
       | measure, which I can't even begin to figure out from their
       | description.
       | 
       | The process of simply flipping coins, even a biased coin, _does_
       | correspond to a dynamic system where the space is all infinite
       | sequences of heads and tails, the invariant measure is the joint
       | probability, and the operator is the left-shift (i.e. if you
       | sample 1 series of coin flips and throw away the first result you
       | still end up with a sample of the same distribution).
       | 
       | But this doesn't translate _at all_ to their proposed scenario of
       | starting everyone at 1 and letting the results diverge from
       | there.
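        | 
        | A quick empirical sketch of that shift-invariance in Python
        | (just comparing the distribution over a short window before and
        | after dropping the first flip; the biased p = 0.6 coin is an
        | arbitrary choice):
        | 
        |     import random
        |     from collections import Counter
        | 
        |     p, window, samples = 0.6, 3, 200_000
        | 
        |     def flips(n):
        |         # one sample path of n biased coin flips
        |         return tuple(random.random() < p for _ in range(n))
        | 
        |     original = Counter(flips(window) for _ in range(samples))
        |     shifted = Counter(flips(window + 1)[1:]
        |                       for _ in range(samples))
        | 
        |     # the two empirical distributions over 3-flip windows should
        |     # agree up to sampling noise
        |     for w in sorted(original):
        |         print(w, original[w] / samples, shifted[w] / samples)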
        
         | base698 wrote:
          | An example I always liked: baggage fees at airlines. On
          | average a $25 bag fee is the same for 4 business travelers as
          | for a family of four, but it affects the family much more.
        
           | contravariant wrote:
           | I'm confused, what's that got to do with ergodicity?
        
       | jmount wrote:
       | I have some notes I share on ergodicity here: https://win-
       | vector.com/2012/02/04/ergodic-theory-for-interes...
        
         | jsweojtj wrote:
         | Note: the link in the post to the pdf doesn't work.
        
       | thewayfarer wrote:
       | If the loss amount is adjusted to 33%, then on average over time
       | individuals will make a net profit. A 50% win will more than
       | compensate for a 33% loss (0.67 * 1.5 = 1.005).
        
         | kpwagner wrote:
         | The greater the edge, the more you can bet on one occurrence.
         | Easier to understand if you look at binary events: double your
         | bet with a win, lose your bet with a loss.
        
         | fractionalhare wrote:
         | Yes. Given 1000 players and 1000 turns, if each player starts
         | with $100 in capital under your chosen parameters:
          |     import random
          | 
          |     l = 0.33   # loss fraction per losing flip
          |     w = 0.5    # win fraction per winning flip
          |     c = 100    # starting capital
          |     m = 1000   # number of players
          |     p = {k: c for k in range(m)}
          |     n = 1000   # number of turns
          | 
          |     for k in range(n):
          |         for j in range(m):
          |             if random.choice([0, 1]):
          |                 p[j] += (w * p[j])
          |             else:
          |                 p[j] -= (l * p[j])
          | 
          |     print(sum([p[k] for k in p]) / len(p))          # mean wealth
          |     print(sum(1 for k in p if p[k] > c) / len(p))   # share ahead
         | 
         | I wrote this up quickly so there might be an error, but under
         | your stated parameters the average wealth increases over time
         | and most people end up wealthier than they started.
         | Specifically, the number of people who will be wealthier at the
         | end seems to converge to somewhere between 57-60%.
         | 
         | NB: This assumes you bet your entire capital each round instead
         | of a constant bet size. In the presence of non-ergodicity you
         | wouldn't want to do this, but that just means it's an even
         | stronger result that most people come out ahead.
         | 
         | In fact 33% happens to be the maximum loss percentage this
         | system (win rate, win percentage, bet = total capital) can
         | tolerate while still exhibiting higher wealth for most players
         | over time :)
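          | 
          | For reference, that breakeven point falls straight out of the
          | per-round growth factors: with the whole bankroll bet each
          | round, the typical (median) player gains exactly when the
          | geometric mean of the two multipliers exceeds 1, i.e. when
          | (1 + 0.5) * (1 - l) >= 1, so l <= 1/3. A quick check:
          | 
          |     import math
          | 
          |     w, l = 0.5, 0.33
          |     # geometric mean of the two per-round multipliers:
          |     # above 1 means the typical (median) player gains
          |     print(math.sqrt((1 + w) * (1 - l)))   # ~1.0025
          |     print(1 - 1 / (1 + w))                # breakeven loss, 1/3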
        
       | nonameiguess wrote:
       | He actually picked bad numbers. That is a losing bet even on
       | average. You can see that pretty easily by just assuming you get
       | exactly the opposite result each time. 1.5 * 0.6 = 0.9, so no
       | matter how you start, you're behind with an equal number of wins
       | and losses. Naively, you might think "50% is more than 40%," but
       | that isn't how percentages work. You need win and loss
       | proportions with a geometric average of 1 for a breakeven
       | expectation (percentages are multiplicative, not additive). In
       | this case, 3/2 and 2/3 is what you'd need.
       | 
       | This happens to not show after only 100 trials just because some
       | tiny number of people get really lucky and draw up the ensemble
       | average, but if you keep going, somewhere between 200 and 500
       | trials, the ensemble average pretty quickly drops below the
       | starting average wealth and stays there, asymptotically
       | approaching 0.
        
         | tryptophan wrote:
          | Oof, yeah, expected value is a very misleading metric. Hard
          | for me to wrap my head around it.
         | 
         | After 2 tosses, the probable outcomes are 25% 2.25, 50% .9, and
         | 25% .36, giving an expected value of 1.1025 interestingly
         | enough. Overall a 75% chance of losing money.
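          | 
          | Here's that two-toss enumeration spelled out, in case anyone
          | wants to extend it to more tosses (a small sketch):
          | 
          |     from itertools import product
          | 
          |     # enumerate the four equally likely two-toss paths
          |     outcomes = {}
          |     for seq in product([1.5, 0.6], repeat=2):
          |         w = round(seq[0] * seq[1], 6)
          |         outcomes[w] = outcomes.get(w, 0) + 0.25
          | 
          |     print(outcomes)  # {2.25: 0.25, 0.9: 0.5, 0.36: 0.25}
          |     print(sum(w * p for w, p in outcomes.items()))      # ~1.1025
          |     print(sum(p for w, p in outcomes.items() if w < 1)) # 0.75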
        
         | maximilianroos wrote:
         | This is wrong and misses the point of the post.
         | 
         | The median outcome is indeed negative, for the reason you give.
         | But the mean outcome is positive, because some players become
         | exceedingly rich.
         | 
          | You can try it at home, here's some Julia which runs it over
          | 1M people, each with 1K flips:
          | 
          |     using Distributions
          | 
          |     n = 1000
          |     d = Binomial(n, 0.5)
          |     to_wealth(heads) = 1.5^heads * 0.6^(n - heads)
          |     rand(d, 1_000_000) .|> to_wealth |> mean
         | 
         | You can keep running that, it's above 1 almost all the time.
         | 
         | To look at this another way -- would you take the other side of
         | the bet? Someone on average has to be making money, and the
         | other side is clearly losing money.
        
           | nonameiguess wrote:
           | I did run this many times by adapting the OP's own code (and
           | vectorizing it, which he asked someone to do). Here it is:
            |     import numpy as np
            |     import pandas as pd
            | 
            |     n_subjects = 100
            |     n_trials = 1000
            |     start = 10.0
            |     win = 1.5
            |     loss = 0.6
            |     prob = 0.5
            | 
            |     results = np.ones((n_trials, n_subjects)) * start
            |     for trial in range(1, n_trials):
            |         wins = np.random.binomial(1, prob, n_subjects) == 1
            |         results[trial, wins] = results[trial - 1, wins] * win
            |         results[trial, ~wins] = results[trial - 1, ~wins] * loss
            | 
            |     ax = pd.DataFrame(results).plot(
            |         legend=False, figsize=(18, 10), logy=True, linewidth=0.5)
            |     ax.plot(results.mean(axis=1), color='red', lw=2,
            |             linestyle='--')
           | 
           | The mean always trends to 0 and every single player
           | eventually loses. There are never any winners at all past
           | around 500 trials or so. Not sure how you're getting a
           | different result as I have never used Julia and can't tell
           | what your code is doing (except apparently something
           | different).
        
             | javitury wrote:
              | Try these parameters:
              | 
              |     n_subjects = 1000
              |     n_trials = 100
        
             | pcmonk wrote:
             | I think your n_subjects is too low. You need that to be
             | high enough or you'll miss those low-probability winners
             | that bring up the average.
        
               | nonameiguess wrote:
               | I did it with more subjects and it doesn't make a
               | difference. The only reason I reduced to 100 is because
               | the plot is unreadable otherwise.
               | 
               | Looking at the Julia code, I think what he is doing wrong
               | is making all wins worth $.50 and all losses worth $.40,
               | but the bet computes a win or loss based on your current
               | wealth, not your starting wealth. His formula would work
               | if you were always betting $1 no matter what your
               | bankroll was, but that isn't what the actual post
               | stipulates.
        
         | alkonaut wrote:
         | It's just a really weird way of expressing the bet. This bet
         | either has a positive EV or a negative one and if it's positive
         | it's going to tend to infinity when repeated.
         | 
          | There is one asymmetry at the zero point (assuming people
          | can't recover from bankruptcy by borrowing another dollar),
          | but that's easily fixed by adding a simple bet strategy, e.g.
          | "bet at most 1/10 of your bankroll on each bet".
        
           | jsweojtj wrote:
           | > This bet either has a positive EV or a negative one and if
           | it's positive it's going to tend to infinity when repeated.
           | 
           | This is wrong. The bet as described has a positive EV and the
           | time average for a single player tends to zero as the bet is
           | repeated.
           | 
            | > There is one asymmetry at the zero point (assuming people
           | can't recover from bankruptcy by borrowing another dollar)
           | ...
           | 
           | The result is not due to zero being an absorbing value. In
           | the setup you can go arbitrarily small and come back without
           | issue. The result is the same.
        
         | jsweojtj wrote:
         | The ensemble average is the expected value, and the expected
         | value is positive.
         | 
         | For a bet of $X, the expected value is: (1.5 * X) * 0.5 + (0.6
         | * X) * 0.5 => 1.05 * X. The ensemble average per round is
         | positive (1.05) and over multiple rounds smoothly tends to
         | infinity with the number of bets. (Definition here:
         | https://en.wikipedia.org/wiki/Expected_value).
         | 
         | The time average for any specific person betting in this game
         | is 0.95 * X (for the reasons you mention) and tends to zero
         | with the number of bets.
         | 
         | So let's go through a few specifics of your comment:
         | 
         | > He actually picked bad numbers. That is a losing bet even on
         | average.
         | 
          | The point of this article is that "on average" is trickier
          | than people tend to assume. There are different ways of taking
          | averages. If you do the expected value calculation and get a
          | positive number, you might (as other comments have said
          | explicitly) expect that a participant repeatedly engaging in
          | such a bet would see their wealth trend toward infinity. But
          | they would be wrong (as shown in the article).
         | 
         | > This happens to not show after only 100 trials just because
         | some tiny number of people get really lucky and draw up the
         | ensemble average, but if you keep going, somewhere between 200
         | and 500 trials, the ensemble average pretty quickly drops below
         | the starting average wealth and stays there, asymptotically
         | approaching 0.
         | 
         | The ensemble average is positive and monotonically increases w/
         | the number of rounds of betting.
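          | 
          | If it helps, both averages are easy to compute exactly rather
          | than by simulation (a quick sketch, using the $1 starting
          | stake from the article and betting the whole stake each
          | round):
          | 
          |     n = 100
          |     # expected wealth: each round multiplies E[wealth] by 1.05
          |     ensemble = (0.5 * 1.5 + 0.5 * 0.6) ** n
          |     # wealth with an exactly even win/loss record (~0.9487^n)
          |     typical = (1.5 * 0.6) ** (n / 2)
          |     print(ensemble)   # ~131.5, grows without bound
          |     print(typical)    # ~0.0052, heads toward zero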
        
       | loup-vaillant wrote:
       | This would be easier to understand if the author used standard
       | vocabulary. Forget about "time average" and "ensemble average",
       | what's important here is distinguishing the _expectation_ of a
       | bet, from the _distribution_ of outcomes. In the bet he
       | describes, the expectation is indeed greater than one, but the
        | distribution means you'll most probably end up poorer, while a
       | lucky few get super-rich.
       | 
       | Who here would take a bet where there's a 95% chance of losing
       | their home and their well paying job, for a 5% chance of becoming
       | a billionaire? I sure wouldn't.
       | 
       | My take on this: don't stop at averages, look at the whole
       | distribution.
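        | 
        | For the bet in the article that's easy to do, since the wealth
        | after n all-in flips of $1 is determined by the binomial count
        | of wins (a small sketch; the cutoff "ended up poorer" is mine):
        | 
        |     from math import comb
        | 
        |     n = 100
        |     # (wealth, probability) for each possible number of wins
        |     dist = [(1.5 ** k * 0.6 ** (n - k), comb(n, k) / 2 ** n)
        |             for k in range(n + 1)]
        | 
        |     print(sum(w * p for w, p in dist))       # mean, ~131.5
        |     print(sum(p for w, p in dist if w < 1))  # P(poorer), ~0.86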
        
         | __s wrote:
         | If there's confidence in this distribution, you make a pool of
         | people who take these bets & divide the result
        
           | loup-vaillant wrote:
           | Successful societies do exactly that. It's called taxes.
           | 
           | As for what a realistic bet would look like (you're founding
           | a startup or something), I believe the expectation is often
           | not much greater than 1, so one does not simply found 100
           | startups and distribute the income of the 5 successful ones
            | to everyone else. (And even if it _is_, the people capable
           | of founding startups often have steadier, though less
           | impressive, means of increasing their wealth. Startups are
           | often founded for reasons other than wealth, after all.)
        
         | nonameiguess wrote:
         | Although these notions are standard language, I still agree
         | with you that the author made it a bit more complicated than it
         | needs to be. Ergodicity for a stochastic process just means the
         | joint distribution of random variables that make up the sample
         | space is time invariant.
         | 
         | For a coin toss example like this, the distribution of heads
         | and tails in each trial is ergodic. The distribution of
         | earnings is not. This isn't because of any difference between
         | time average versus ensemble average. It's because the
         | probability of winning each toss is time invariant but the
         | amount you stand to win or lose isn't because it's a function
         | of both the probability of winning and your current bankroll,
         | and current bankroll is not time invariant.
         | 
         | Although, ironically, because of the numbers he picked, all
         | bankrolls tend to zero eventually, so over a large enough
         | number of trials, wealth eventually becomes an ergodic process
         | as well. Graphing out his scenario over more trials gives a
         | sort of heat death of the universe plot, where some players
          | stay alive longer than others, but in the long run, entropy
         | always wins.
        
         | fractionalhare wrote:
         | "Time average" and "ensemble average" _are_ standard vocabulary
         | in the statistical mechanics literature. Your comment is
          | essentially a restatement of the article's point.
         | 
         | I think it's uncharitable to say the article would be easier to
         | understand if it didn't use the language of ergodicity. Its
         | explicit goal is to show how non-ergodicity leads to an example
         | like yours.
         | 
         | So of course your comment seems easier to understand. But
         | that's because you're just saying different distributions can
         | be parameterized by the same mean. Ergodicity is about a lot
         | more than that, and the language of ergodicity was the entire
         | exercise here.
        
           | loup-vaillant wrote:
           | Thing is, I don't believe we even _care_ about the time
           | average. What we care about is the evolution of the
           | distribution of outcomes over time.
           | 
           | More specifically:
           | 
           | - The distribution of outcomes at certain points of interest
           | in time (like the valuation of my company when I intend to
           | sell it).
           | 
           | - The probability that we cross a catastrophic threshold at
           | some point (like bankruptcy).
           | 
           | Time average is a _terrible metric_ to estimate those things.
            | Heck, I'm not sure it can measure anything of interest,
           | besides our own mistaken intuitions. It should probably be
           | called something like "time average fallacy".
        
             | fractionalhare wrote:
             | I'm a little confused - ergodic theory very much cares
             | about the time average. Or do you mean the toy example of
             | betting shouldn't care about it?
             | 
             | It seems like you think the problem here is too
             | unsophisticated for ergodic theory or something. Which,
             | fine sure. But this isn't an article intended to teach you
             | about betting. It's an article intended to teach you about
             | ergodicity, using betting as a _toy example._ The author
              | isn't trying to introduce the best way to analyze betting
             | strategies, they're trying to show what non-ergodicity is.
             | And I think they basically succeed.
             | 
             | Just meet the article where it is, for its intended usage.
        
           | kgwgk wrote:
           | > "Time average" and "ensemble average" are standard
           | vocabulary in the statistical mechanics literature
           | 
            | But their application to non-statistical-mechanical things
            | is very confusing.
           | 
           | Of course wealth is not ergodic. Ergodicity would mean that
           | the distribution is always the same. Every point in time
           | would be identical to every other point in time and growth
           | would be impossible.
        
             | fractionalhare wrote:
             | I agree it's not perfectly explained. But I think someone
             | new to ergodic theory would find the article clearer (or at
             | least more helpful overall) than your second paragraph
             | here.
        
               | kgwgk wrote:
               | "What we're seeing is that even though the expected value
               | is positive, and the ensemble average is increasing, the
               | time average for any single person is usually decreasing.
               | The average of the entire "system" increases, but that
               | doesn't mean that the average of a single unit is
               | increasing."
               | 
               | Someone new to ergodic theory may understand from that
               | article that if wealth was ergodic the average for every
               | trajectory would increase like the average for the entire
               | system. But that doesn't make sense.
        
       | amelius wrote:
       | > Wealth in this scenario is non-ergodic, since the wealth in the
       | future depends on the wealth of the past (path dependence).
       | 
       | Perhaps we should find a tax rule which makes wealth ergodic.
        
         | hirundo wrote:
         | That's a straightforward proposal. Making wealth non-path
         | dependent means seizing and redistributing it periodically or
         | continuously. That would tend toward economic equality, but
         | since it has the side effect of suppressing the incentive to
         | create wealth, it's an equality of poverty.
        
           | andi999 wrote:
            | People might be happier with an equality of poverty (but it
            | has to be on a world scale).
        
           | amelius wrote:
           | > That's a straightforward proposal. Making wealth non-path
           | dependent means seizing and redistributing it periodically or
           | continuously. That would tend toward economic equality, but
           | since it has the side effect of suppressing the incentive to
           | create wealth, it's an equality of poverty.
           | 
           | Did you consider the case where A works twice as hard as B
           | and ends up with twice the wealth of B?
        
           | igorkraw wrote:
           | I mean, all you'd need is a damping factor via a wealth tax,
           | above some threshold that's still motivating enough. I doubt
           | anyone would call everyone having 1 million dollars "equality
           | of poverty", and you can make it so that if you are _really_
           | good at business, you can outrun the wealth tax up to say 100
           | million $. Oh, and 100% inheritance tax on estates above 1
            | million, with a buyback right for family businesses that
            | allows you to buy back the business at its current market
           | value over X years (so you actually benefit from inflation).
           | 
           | IFF implemented globally, would you truly argue this
           | disincentivizes creation of wealth? Add inflation and
           | exchange rate adjustments and I honestly don't buy that
           | argument anymore.
           | 
           | NOW, will you have selfish actors trying to game the system
           | and evade these taxes through all means possible? Yes, but
           | that's why I think anyone who supports the protection of
           | private property through state violence and democracy at the
           | same time needs to do some heavy gymnastics to justify tax
           | evasion _and_ dynasty enabling tax policies (i.e., anything
           | that doesn 't at least do the 100% inheritance/gift tax bit).
           | And of course you'd need to implement it either globally or
           | at least in economic powerhouse blocks like EU+US+Canada.
        
           | otde wrote:
           | I wonder if that's not necessarily a negative thing. Much of
           | the beauty of the open-source ecosystem comes from its
           | (typical but not guaranteed) lack of explicit wealth
           | incentives. It feels to me like the idea that wealth
           | redistribution _necessitates_ widespread poverty is almost
           | purely speculative, though I'm happy to be corrected.
        
             | jandrewrogers wrote:
             | I think open source is an example of how any set of
             | incentives cuts both ways, it is just prioritizing
             | different things. Incentives are offsetting; things like
             | wealth incentives are often valuable to encourage people to
              | do important things they otherwise would have no incentive
              | to do in open source.
             | 
             | For example, a well-known issue with open source data
             | infrastructure is that it often has much lower performance
             | and efficiency than equivalent proprietary software. There
             | are many dis-incentives in the open source ecosystem to
             | producing software that is highly performant and efficient,
             | not the least of which is development complexity and
             | sophistication level required to contribute. Open source
             | developers do not pay the operational cost of wasteful data
             | infrastructure but they _do_ pay the cost of their time,
             | and prioritize accordingly. Proprietary data infrastructure
             | is explicitly motivated by wealth incentives to be highly
             | efficient, which is why companies invest in it even though
              | open source equivalents exist.
             | 
             | The relative wastefulness of open source in terms of
             | computing resources is increasingly perceived as bad for
             | the environment, so it isn't just a money motivation.
             | Incentives are a powerful thing and it is evident that open
             | source lacks incentives to produce some important outcomes.
        
               | inimino wrote:
               | This sounds like a just-so story about incentives and
               | open-source software, and seems to ignore some of the
               | significant motivations actually at play, as well as some
               | fairly glaringly obvious historical examples. What are
               | the specific examples you have in mind here?
        
             | nlitened wrote:
             | We could also say that much of the beauty of the open-
             | source ecosystem comes from top 1% earners (software
             | developers) having cushy jobs and some free time to invest
             | in common good.
        
           | mensetmanusman wrote:
            | Not necessarily, society could decide what the "I win" level
            | of income is.
           | 
           | Maybe it's $1 billion, after which you hit a ceiling function
           | or wealth becomes ergodic, and you earn a badge that says
           | "you won" (in the context of this society).
           | 
           | If you earn such a badge, society would call on you for
            | advice (if you are a non-inheritor).
        
             | ckw wrote:
             | The value should be low enough as to make capture of the
             | political system at the national level by small groups of
             | wealthy individuals impractical.
        
             | 6gvONxR4sf7o wrote:
             | I love this idea of giving people a badge or even better a
             | sticker if they hit some particular level above which we
             | take their excess. It's hilarious.
        
       | 6gvONxR4sf7o wrote:
       | This one took me way too long to get intuition for and I don't
       | think OP's explanation would have helped me grok it. I wonder if
       | it's like monads, in that the intuition is really simple, but
       | it's a very unique concept; so unusual that the intuition is hard
       | to convey to the uninitiated. But the intuition really is simple,
       | so the initiated feel compelled to try to convey how simple it
       | is.
       | 
       | My intuition for it is like a particular kind of spread out
       | mixing. Imagine a giant bowl with a bunch of crazy high powered
       | pinball bumpers [0] at the bottom, randomly jostling the pinballs
       | around, sometimes kicking them out where they started, but they
       | always come back eventually.
       | 
       | If you roll a ball into it, it doesn't matter where you start.
       | It'll get lost in the mix eventually. (it "mixes" sufficiently).
       | 
       | And no matter where you start, those high powered bumpers will
       | eventually happen to kick a pinball all the way out there again,
       | given long enough. (The "mixing" spreads things out sufficiently
       | and occasionally sends things all the way to any point in the
       | bowl).
       | 
       | By contrast, a bowl that's just high friction, where everything
       | ends up stopped at the bottom wouldn't work, even though it makes
       | it not matter where you started, because it doesn't "spread." It
       | just sinks things to the same spot. An inverted bowl/a dome
       | wouldn't work because starting on opposite sides means you'll
       | just roll away from each other and never come together (no
       | "mixing" at all). A bowl without bumpers would have you coming
       | back where you started, but not "mixing" around to all the other
       | spots.
       | 
       | You need both elements. It has to not matter where you started
       | specifically by getting back to where anyone started.
       | 
       | Rereading my comment now, it really does come off as "a monad is
       | like a burrito," doesn't it. But screw it, I'll hit post and
       | maybe it helps somebody.
       | 
       | [0] This kind of bumper: https://encrypted-
       | tbn0.gstatic.com/images?q=tbn:ANd9GcQa3gvh...
        
         | ssivark wrote:
         | Here's the essence of non-ergodicity: the "typical" result
         | (technically the mode, but practically the median is
         | acceptable) is very different from the "mean" result. The
         | language of "time" -vs- "ensemble" averages is simply about
         | whether to include only the "typical" possibilities or to also
         | include corner cases which have extremely low probability but
         | extremely high payoffs. The technical point is that unless you
         | get to play the game ("sample the process") absurdly many times
         | (comparable to the total number of possibilities), you will
         | never access the corner cases, and so the averaging over the
         | "typical" results corresponds better to reality. The "mixing"
         | idea means that if you run the process "long enough" then there
         | won't be any corner cases -- the extreme possibilities will
         | thoroughly mix with the typical possibilities, so both kinds of
         | averaging will lead to the same (correct) answer.
         | 
         | If you grok this, everything else is technical detail and
         | window dressing.
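          | 
          | For the article's game you can see exactly how small that
          | corner is: weight each 100-flip outcome by its payoff and
          | check how little probability mass carries nearly all of the
          | mean (rough sketch; the cutoff of 65 wins is arbitrary):
          | 
          |     from math import comb
          | 
          |     n, cut = 100, 65
          |     def wealth(k): return 1.5 ** k * 0.6 ** (n - k)
          |     def prob(k): return comb(n, k) / 2 ** n
          | 
          |     mean = sum(wealth(k) * prob(k) for k in range(n + 1))
          |     tail_p = sum(prob(k) for k in range(cut, n + 1))
          |     tail_share = sum(wealth(k) * prob(k)
          |                      for k in range(cut, n + 1)) / mean
          |     # a tiny sliver of the probability carries the bulk of the
          |     # expected value
          |     print(tail_p, tail_share)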
        
         | nvader wrote:
         | I found this helpful. Not because it has helped me to grok
         | ergodicity yet, but because it's another foothold on my way to
         | climb the mountain. At some point it will click, and your
         | pinball-bowl mixer will help, I'm sure.
         | 
         | I'm also happy for the link to "a monad is like a burrito"!
        
       | pm90 wrote:
        | Honestly I would recommend the video the author linked to, which
        | was much clearer in its explanation: https://youtu.be/CCLtQHL-
        | VUs.
        
       | fractionalhare wrote:
       | The thrust of the point is that while the wealth of a group will
       | rise on average when playing a game with positive expected value,
       | individuals with significant upfront losses will lose over time
       | if the reward percentage is too close to the loss percentage.
       | Because your future wins depend on your present capital, which in
       | turn depends on your past wins. This becomes an optimization
       | problem!
       | 
        | This does _not_ mean that you shouldn't play a game with
       | positive expected value. Expected value is still the salient
       | framework with which you should judge risk. It just means that
       | the size of your bet needs to be considered in conjunction with
       | your total capital, not just whether any individual bet is more
       | likely to win than lose.
       | 
       | The author states this seems to not be well known in finance, but
       | in point of fact this is very well known in both literature and
       | practice. A trading strategy with positive expected value has
       | additional considerations before you execute on it, including
       | your total capital and liquidity.
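        | 
        | Concretely, for this game you can see the effect of bet sizing
        | by looking at the expected log-growth per round as a function
        | of the fraction f of your bankroll you stake (a quick sketch;
        | f = 1 is the all-in version from the article):
        | 
        |     import math
        | 
        |     def log_growth(f):
        |         # expected log of the per-round wealth multiplier when
        |         # betting a fraction f of current capital
        |         return (0.5 * math.log(1 + 0.5 * f)
        |                 + 0.5 * math.log(1 - 0.4 * f))
        | 
        |     for f in (1.0, 0.5, 0.25, 0.1):
        |         print(f, log_growth(f))
        |     # f = 1.0 is negative (long-run ruin), f = 0.5 roughly
        |     # breaks even, and the optimum is around f = 0.25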
        
         | SatvikBeri wrote:
         | Well, not just upfront losses, since it's commutative. You
         | break even if you have 10 wins for every 9 losses, which will
         | happen to very few people with enough flips - but the amount
         | you win if you have more victories than that threshold is quite
         | large, while you can only lose $1 at most.
        
       | vladTheInhaler wrote:
       | I wrote up a jupyter notebook myself looking at this problem a
       | couple years ago and I was thinking about cleaning it up into a
       | blog post. Oh well. Guess that just goes to show the dangers of
       | procrastination.
       | 
       | For what it's worth, my takeaway was that the "paradox" is that
       | we aren't accounting for the nonlinear utility of money.
       | Therefore the exponentially unlikely probabilities of winning
       | quadrillions of dollars have exponentially large weights. But a
       | quadrillion dollars isn't a million times more useful to me than
       | a billion dollars. So if you account for that saturation effect
       | and take the expected _utility_ instead, the  "paradox" goes
       | away.
        
       | oh_sigh wrote:
       | ergodic example: Rolling a die. If you roll a die 1e6 times in a
       | row, you will get each number approximately 1e6/6 times. If you
        | roll 1e6 dice once all at the same time, you will get each number
       | approximately 1e6/6 times. Same thing basically.
       | 
       | non-ergodic example: Russian roulette. If you play russian
       | roulette 1e6 times in a row, you will always be dead at the end.
       | If 1e6 people play Russian roulette at the same time, 1e6*(5/6)
       | people will be alive at the end, and only 1e6/6 people will be
       | dead.
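        | 
        | That contrast is easy to simulate, e.g. in Python (one pass for
        | the ensemble of players, one pass for a single repeated player):
        | 
        |     import random
        | 
        |     pulls = 1_000_000
        | 
        |     # ensemble average: many players, one pull each
        |     alive = sum(1 for _ in range(pulls)
        |                 if random.randrange(6) != 0)
        |     print(alive / pulls)   # ~5/6
        | 
        |     # time average: one player, many pulls
        |     survived = 0
        |     for _ in range(pulls):
        |         if random.randrange(6) == 0:
        |             break          # dead; no further pulls
        |         survived += 1
        |     print(survived)        # almost always a small number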
        
         | ryebit wrote:
         | Thank you! That's... well, beautiful isn't the right word,
          | given the nature of the example... But I was hoping to find a
          | simple non-ergodic example.
         | 
         | The situation the article explores was interesting, but made
         | the jump to something mathematically complex before I sunk my
         | teeth into the fundamental bit.
        
           | aliceryhl wrote:
           | As I understand it, the difference is basically whether it is
           | possible to end up in a situation where you are out of the
           | game going forward. E.g. with repeated bets, you eventually
           | hit zero money, and then you are stuck at zero forever.
        
             | inimino wrote:
             | There could also be attractor states that reduce the risk.
             | For example in this case (if the numbers are taken to be
             | such that the EV is positive) then you can also end up with
             | a lucky player getting rich. The chance of that player
             | going bust goes down much lower than the chance for a new
             | player starting with $1. So while individual players may
             | tend to go bust reliably, the total pool of wealth can
             | still grow beyond any set upper boundary. Over time each
             | player tends to get fabulously rich or go bust, so the game
             | is mostly one of whether an initial run of luck gets you
             | out of the danger zone before running into zero.
             | 
             | If you added some effects on what kind of gambles are
             | available to players at different levels, you can create
             | several different attractor states.
             | 
             | Ergodicity is a nice property of models like molecules of
             | gas bouncing around a room, which means that statistical
             | mechanics is practical. If one percent of the molecules
             | tended to end up with all the kinetic energy, while the
             | other molecules gradually one by one reached a complete
             | standstill, then statistical mechanics wouldn't work.
             | 
             | Since the very simple process shown in the article doesn't
             | have this property, it means some familiar statistical
             | tools can't be used naively with these models, or to
             | extrapolate a little bit, to any model of any human
             | activity that tends to these kinds of capturing, fixed-
             | point, attractor outcomes.
        
       | Ceezy wrote:
       | Proving that a measure is invariant is usually hard or impossible
        | (in physics). It's useful as a toy experiment but usually not
        | for real-life examples.
        
       | nelsondev wrote:
       | This example assumes you only have $1 to bet, and if you lose it,
       | you're out of the game. I wonder what happens to the simulated
       | outcome if you can keep betting, even if you lose.
        
         | vitus wrote:
         | Not quite.
         | 
         | > Everyone starts with $1, gets 50% profit if they win, and
         | pays 40% of their bet if they lose.
         | 
         | From the wealth-over-time graph, it looks like the bet is sized
         | such that it's always 100% of what you have (per some
         | trajectories going as small as 10^-7).
         | 
         | My read is that while each individual bet looks good in
         | isolation, the fact is that one win and one loss puts you in
         | the red overall -- when dealing with this iterated experiment
         | you want to look not at E[X] = 0.05, but at E[log(X)] = -0.05
         | to get a sense for how your assets evolve each round.
         | 
         | For some intuition: if you win once and lose once, your net
         | result is 1.5 * 0.6 = 0.9, so you've lost 10% of your starting
         | money.
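          | 
          | (Quick check of those two numbers, for anyone following
          | along:)
          | 
          |     import math
          | 
          |     # per-round expected return vs expected log growth
          |     print(0.5 * 0.5 + 0.5 * (-0.4))                   # ~0.05
          |     print(0.5 * math.log(1.5) + 0.5 * math.log(0.6))  # ~-0.053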
        
           | vitus wrote:
           | To expand on this further, and to plug the Kelly Criterion
           | which the author mentions having written a former blog post
           | on:
           | 
           | Suppose instead of betting all your money every round, you
           | instead decide to bet 10 cents each time. Now, instead of
           | being essentially guaranteed long-term ruin, you can and most
           | likely will be able to continue making money indefinitely.
           | (In fact, your chance of ever dipping below, say, $0.50 is
           | finite even when extending your rounds played arbitrarily.)
           | 
           | The Kelly Criterion for this scenario actually dictates that
           | you should bet 25% of your money each round. Using this
           | betting strategy, somewhere around 70% of people end up
           | making money off this game when run for 100 rounds (1% end up
           | ending up with a respectable $25 or more, while about as many
           | end up with <$0.15). You even have an opportunity for
           | redemption -- when we drag out the horizon to 5000 rounds
           | played, somewhere around 90% of individuals become
            | _billionaires_, even as 30% of people were behind after 100
           | rounds (so 2/3 of those redeemed themselves).
           | 
           | On the other hand, with the all-or-nothing solution outlined
            | in the article, about 13% of the population comes out ahead
           | (around 1% of the population gets really rich, ending up with
           | >$200, while more than half end up with less than a penny).
           | Meanwhile, the odds get worse as the game goes on, as at just
           | 500 rounds, >80% of players have been reduced to less than a
           | penny.
           | 
           | That's a long-winded way of saying that the amount you bet is
           | really important.
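            | 
            | If anyone wants to sanity-check those figures, here's a
            | rough simulation sketch of the fixed-fraction strategy (the
            | play() helper and its parameters are just mine):
            | 
            |     import random
            | 
            |     def play(rounds, frac, players=10_000, start=1.0):
            |         # fraction of players who end up ahead of where
            |         # they started
            |         ahead = 0
            |         for _ in range(players):
            |             w = start
            |             for _ in range(rounds):
            |                 if random.random() < 0.5:
            |                     w += 0.5 * frac * w
            |                 else:
            |                     w -= 0.4 * frac * w
            |             ahead += w > start
            |         return ahead / players
            | 
            |     print(play(100, 0.25))  # roughly 70% come out ahead
            |     print(play(100, 1.0))   # all-in: far fewer winners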
        
       ___________________________________________________________________
       (page generated 2021-04-03 23:01 UTC)