[HN Gopher] Kelly Can't Fail
       ___________________________________________________________________
        
       Kelly Can't Fail
        
       Author : jmount
       Score  : 341 points
       Date   : 2024-12-19 23:07 UTC (23 hours ago)
        
 (HTM) web link (win-vector.com)
 (TXT) w3m dump (win-vector.com)
        
       | dado3212 wrote:
       | Very cool writeup, would've benefited from some LaTeX formatting.
        
         | jmount wrote:
         | Thank you, and sorry. The Wordpress/Markdown path seems to be
         | getting worse over time.
        
           | smcin wrote:
           | How is the Wordpress/Markdown path getting worse over time?
        
             | jmount wrote:
             | Mathjax used to be available more places. Jupyter used to
             | respect spacing around h2 labels. Things like that.
        
       | Vecr wrote:
       | Checks out with multiple RNG seeds.
       | 
       | It shouldn't be a problem because the RNG is advanced each run.
       | Might save someone a check though.
        
         | jmount wrote:
         | I love that. Gaming the seed is always a possibility in demos.
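
A sketch of that multi-seed check (my code, not from the article; exact arithmetic via Fraction so the equality is literal): play the bet-your-edge strategy against several differently seeded shuffles and the return is identical every time.

```python
import random
from fractions import Fraction

def kelly_return(deck):
    """Bet |reds - blacks| / (cards left) of wealth on the
    majority colour for each card in the deck."""
    wealth = Fraction(1)
    reds = deck.count('R')
    blacks = len(deck) - reds
    for card in deck:
        edge = Fraction(abs(reds - blacks), reds + blacks)
        majority = 'R' if reds > blacks else 'B'
        wealth *= (1 + edge) if card == majority else (1 - edge)
        if card == 'R':
            reds -= 1
        else:
            blacks -= 1
    return wealth

for seed in range(5):
    rng = random.Random(seed)
    deck = list('R' * 26 + 'B' * 26)
    rng.shuffle(deck)
    print(float(kelly_return(deck)))  # 9.0813... for every seed
```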
        
       | hawkjo wrote:
       | Very cool to see no variance in the outcome. But that also makes
       | it feel like there should be a strategy with better expected
       | return due to the unique problem structure. Do we know if the
       | Kelly strategy is optimal here?
        
         | jmount wrote:
         | The book claims it is optimal for a set of strategies they
         | called "sensible." I didn't think the argument flowed as well
         | as the zero variance part of the proof, so I didn't work it in.
         | I think the source also hinted at a game-theory proof as they
         | called the sub-strategies in the portfolio "pure strategies."
        
         | rahimnathwani wrote:
          | > Do we know if the Kelly strategy is optimal here?
         | 
         | What do you mean by optimal? Do you mean you're willing to risk
         | going bankrupt, if it means a higher expected value?
        
           | scotty79 wrote:
            | Surely there's some space between risking going bankrupt
            | and risking getting less than the 9.08 return guaranteed
            | by the Kelly strategy.
            | 
            | If you are willing to take some risk in exchange for the
            | possibility of a higher payout, just bet a bit more than
            | Kelly recommends. That's your "optimal" strategy for the
            | amount of risk you are willing to take. I imagine its
            | expected return is the same as Kelly's, and calculating
            | its variance is left as an exercise for the reader.
        
             | rahimnathwani wrote:
              | > I imagine its expected return is the same as Kelly
             | 
             | Given two options with the same expected return, most
             | people would prefer the lower variance.
             | 
             | Accepting higher variance with no increase in expected
             | return has a name: gambling.
        
         | barbegal wrote:
          | It is optimal for expected returns, yes.
        
         | travisjungroth wrote:
         | I have a feeling it's the highest EV. I tried a strategy of
         | flipping all the cards until there's only one color left and
         | then betting it all every time. Ran a million trials and got
         | 9.08.
         | 
         | I was thinking these are very different strategies, but they're
         | not exactly. The Kelly strategy does the same thing when
         | there's only one color left. The difference is this strategy
         | does nothing before that point.
         | 
         | Still, they feel like limit cases. Betting it all with only one
         | color left is the only right move, so it's what you do before
         | that. Nothing and Kelly seem like the only good strategies.
        
           | foota wrote:
           | Ah, but these aren't the same. The Kelly strategy has zero
           | variance, whereas this strategy likely has very high
           | variance.
           | 
           | It would be interesting to do the math and show why they're
           | equal. It seems like you should be able to make the same sort
           | of portfolio probability argument.
        
             | foota wrote:
             | To start, your minimum return is 2x, and depending on how
             | many cards of a single color are left at the end, you get a
             | return of 2^N. You could take the summation of those N card
             | returns, times the probability of each, and that must come
             | out to 9.08 on average.
             | 
              | I guess the number of possible arrangements of cards
              | with N of one color remaining is... N! (permutations of
              | those N cards), times 2 (for the two colors), times the
              | number of permutations of the other 52 - N cards, times
              | 26 choose N?
              | 
              | Ah, yes this works, you can see it here: https://www.wolfra
              | malpha.com/input?i=%28summation+of+N%21+*+....
              | 
              | That is: summation of N! * (52 - N)! * (26 choose N) *
              | 2^N / 52! from N=0 to 26. (For some reason the * 2 for
              | different suits was over-counting, so I removed it. Not
              | sure why? Also it seems like it should be from 1 to 26,
              | but that also doesn't give the right answer, so
              | something is whack.)
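
foota's sum can be checked exactly with rational arithmetic. The N=0 / missing-factor-2 puzzle also has a tidy resolution (my derivation, not from the thread): writing 2^r = 1 + 2^0 + 2^1 + ... + 2^(r-1) and using P(run >= N) = 2 * P(last N cards all red) telescopes E[2^run] into exactly the sum of P(last N cards all red) * 2^N over N = 0..26, i.e. a sum with an N=0 term and no factor of 2.

```python
from fractions import Fraction
from math import comb, factorial

# summation of N! * (52 - N)! * (26 choose N) * 2^N / 52!, N = 0..26
s = sum(Fraction(factorial(N) * factorial(52 - N) * comb(26, N) * 2**N,
                 factorial(52))
        for N in range(27))

# the zero-variance Kelly return, 2^52 * 26! * 26! / 52!
target = Fraction(2**52 * factorial(26)**2, factorial(52))
print(float(s), s == target)  # 9.0813... True
```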
        
             | travisjungroth wrote:
             | Of course they're not the same. They have the same EV and
             | the strategies do the same thing in a condition that always
             | happens: there's only one color left. The variance is
             | wildly different.
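
Both halves can be made exact rather than Monte Carlo: the wait-then-double strategy returns 2^run, where run is the length of the final single-colour stretch, so its mean and variance follow from the run-length distribution. A sketch (my derivation, not from the thread):

```python
from fractions import Fraction
from math import comb, factorial

def p_run_at_least(n):
    """P(the last n cards are all one colour), n >= 1."""
    return Fraction(2 * comb(26, n), comb(52, n))

# distribution of the final single-colour run length
dist = {n: p_run_at_least(n) - (p_run_at_least(n + 1) if n < 26 else 0)
        for n in range(1, 27)}

ev = sum(p * 2**n for n, p in dist.items())    # E[2^run]
ev2 = sum(p * 4**n for n, p in dist.items())   # E[4^run]

kelly = Fraction(2**52 * factorial(26)**2, factorial(52))
print(ev == kelly)         # True: identical expected value
print(float(ev2 - ev**2))  # positive: far from zero variance
```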
        
         | lupire wrote:
          | The Kelly criterion _is_ the strategy with better return due
          | to the unique problem structure.
        
         | OscarCunningham wrote:
         | In this game, all strategies have the same expected value, so
         | long as they follow the rule 'if the remaining deck is all the
         | same colour, then you should bet everything you have on that
         | colour'.
        
       | raydiak wrote:
       | As a guy named Kelly, I appreciate the vote of confidence!
        
         | pvg wrote:
         | I think you're underselling it a bit, it's a decree of
         | confidence rather than a mere vote.
        
       | barbegal wrote:
       | It would have been a better demo if reduced to more manageable
       | numbers e.g. a deck of 2 black and 2 red cards.
       | 
       | Turn 1 r = b so no bet
       | 
       | Turn 2 bet 1/3 on whichever card wasn't revealed in turn 1.
       | 
        | Turn 3: either you were wrong on turn 2, in which case you
        | have 2/3 of your stake but know the colour of the next two
        | cards, so you can double your stake each turn to reach 4/3
        | after turn 3; or you were right, and you have 4/3 of your
        | stake but one card of each colour left, so you don't bet this
        | turn.
       | 
       | Turn 4 you know the colour of the final card so you double your
       | money to 8/3 of your original stake.
       | 
        | And then the exercise for the reader is to prove optimality
        | (which is fairly straightforward, but I don't believe there
        | is a short proof).
        
         | stevage wrote:
         | Agreed, I could follow the general argument but not enough to
         | be convinced about why the result is exactly the same
         | regardless of the order of cards.
        
         | libraryofbabel wrote:
         | Yes. Although four cards has only one nontrivial branch, on
         | turn 3. So, start out with the four cards example, and then
         | show tree diagrams for the 5 and 6 cards cases (still
         | manageable numbers) to build intuition for induction to the
         | general case.
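
barbegal's four-card game above is small enough to enumerate outright; a sketch (function name mine) confirming that every ordering of RRBB returns exactly 8/3:

```python
from fractions import Fraction
from itertools import permutations

def kelly_return(deck):
    """Bet the edge |reds - blacks| / (cards left) on the majority."""
    wealth = Fraction(1)
    reds = deck.count('R')
    blacks = len(deck) - reds
    for card in deck:
        edge = Fraction(abs(reds - blacks), reds + blacks)
        majority = 'R' if reds > blacks else 'B'
        wealth *= (1 + edge) if card == majority else (1 - edge)
        if card == 'R':
            reds -= 1
        else:
            blacks -= 1
    return wealth

returns = {kelly_return(deck) for deck in set(permutations('RRBB'))}
print(returns)  # {Fraction(8, 3)}: identical for all six orderings
```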
        
       | siavosh wrote:
       | Can anyone comment on the universal portfolio article linked in
       | the conclusion? Asking for a friend.
        
         | robbomacrae wrote:
         | It's a theory on how to optimally rebalance your investment
         | portfolio every day.
         | 
         | Original paper by Thomas Cover:
         | https://isl.stanford.edu/~cover/papers/paper93.pdf
         | 
         | A good breakdown with links to code examples by Andy Jones:
         | https://andrewcharlesjones.github.io/journal/universal-portf...
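
For a flavour of Cover's result, a minimal two-asset sketch (my toy example, not the paper's full algorithm): with a uniform prior, the universal portfolio's wealth equals the average of the wealths of all constant-rebalanced portfolios, and in a sideways market it grows even though buying and holding either asset goes nowhere.

```python
import numpy as np

def universal_wealth(price_relatives, grid=101):
    """Average the final wealth of constant-rebalanced portfolios
    (CRPs) over a grid of mixes; with a uniform prior this average
    is the wealth of Cover's universal portfolio."""
    bs = np.linspace(0.0, 1.0, grid)        # fraction held in asset 1
    wealth = np.ones(grid)
    for x1, x2 in price_relatives:          # per-period price relatives
        wealth *= bs * x1 + (1.0 - bs) * x2
    return wealth.mean(), wealth.max()

# toy market: asset 1 alternately halves and doubles, asset 2 is cash;
# buy-and-hold of either asset ends exactly where it started
market = [(0.5, 1.0), (2.0, 1.0)] * 10
univ, best_crp = universal_wealth(market)
print(univ, best_crp)  # universal grows; best CRP (b = 1/2) grows more
```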
        
       | malisper wrote:
        | I need to do some math, but I wonder if there's a better
        | strategy than Kelly betting. An assumption made for Kelly
        | betting is that the bets are independent of each other.
        | That's not the case in the problem given.
       | 
       | After making a bet, you gain information about the contents of
       | the rest of the deck of cards. I could see it being possible to
       | do better by pricing in that information into your bet.
        
         | amluto wrote:
         | But the information gained in this game is independent of your
         | bet. The multi-armed bandit problem is a famous example of the
         | opposite situation.
        
         | necovek wrote:
         | That seems to be exactly what this strategy is doing: at every
         | step, you account for the probability of the red or black card
         | coming up, and bet accordingly (both the sum and the colour).
        
       | tooblies wrote:
       | It's really disappointing that the code examples aren't given in
       | PyGyat.
        
       | pcthrowaway wrote:
       | Note that you need to be able to infinitely divide your stake for
       | this to work out for you all the time.
       | 
        | For example, if the deck has 26 red cards on top, you'd end
        | up dwindling your initial $1.00 stake to about $0.000000135
        | before riding it back up to 9.08.
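
That worst case is easy to trace exactly (my sketch, exact arithmetic): the stake bottoms out at the final return divided by 2^26, just before the guaranteed doubling run.

```python
from fractions import Fraction

# all 26 red cards on top: lose every bet for 26 cards,
# then double 26 times on the known black run
wealth = Fraction(1)
low = wealth
reds, blacks = 26, 26
for card in ['R'] * 26 + ['B'] * 26:
    edge = Fraction(abs(reds - blacks), reds + blacks)
    majority = 'R' if reds > blacks else 'B'
    wealth *= (1 + edge) if card == majority else (1 - edge)
    low = min(low, wealth)
    if card == 'R':
        reds -= 1
    else:
        blacks -= 1

print(float(low))     # ~1.35e-07: the dwindled stake
print(float(wealth))  # 9.0813...: the guaranteed final return
```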
        
         | jmount wrote:
         | Very good point. I did some experiments and the system is very
         | sensitive to any sort of quantization or rounding of bets. You
         | get the expected value about the right place, but the variance
         | goes up quickly. So in addition to your important case, things
         | are a bit dicey in general.
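
A rough illustration of that sensitivity (my sketch; each bet rounded to the nearest cent on a $100 stake): the zero-variance guarantee disappears as soon as bets are quantized.

```python
import random

def rounded_return(deck, start=100.0):
    """Kelly play, but with each bet rounded to the nearest cent."""
    wealth = start
    reds = deck.count('R')
    blacks = len(deck) - reds
    for card in deck:
        edge = abs(reds - blacks) / (reds + blacks)
        bet = round(wealth * edge, 2)   # quantized bet
        majority = 'R' if reds > blacks else 'B'
        wealth += bet if card == majority else -bet
        if card == 'R':
            reds -= 1
        else:
            blacks -= 1
    return wealth / start

rng = random.Random(0)
results = []
for _ in range(1000):
    deck = list('R' * 26 + 'B' * 26)
    rng.shuffle(deck)
    results.append(rounded_return(deck))

# no longer a single zero-variance number
print(min(results), max(results))
```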
        
         | boothby wrote:
         | If you start out with a $1e12 stake, you're able to avoid
         | catastrophic rounding errors even in the worst case. There's
         | probably a life lesson here.
        
           | fragmede wrote:
           | Is the lesson: choose to be born to wealthy parents?
        
             | darkerside wrote:
             | Or is it to choose appropriate betting amounts based on
             | your capacity for risk
        
               | Onavo wrote:
               | IEEE-754 isn't precise enough for my capacity :( I too
               | need rich parents.
        
               | laidoffamazon wrote:
               | I guess it then does follow that having rich parents does
               | expand your capacity for risk!
        
               | darkerside wrote:
               | This is a truism
        
               | User23 wrote:
               | The lesson I'm taking away is "learn math and how to use
               | it."
        
               | barrenko wrote:
               | Learn math and discover poignantly all the situations
               | where it is effectively useless.
        
               | paulluuk wrote:
               | Applying math to a more practical betting situation, like
               | poker, is a lot harder. You'd have to be able to
               | calculate your exact odds of winning given only a small
               | amount of information, without a calculator and without
               | it taking so long that the other players notice, and then
               | also factor in the odds that the other players are
               | bluffing and the advantages that you might have from
               | (not) bluffing.
        
               | Etheryte wrote:
               | Or is it to choose appropriate betting amounts based on
               | your parents?
        
             | croes wrote:
              | It's easier to make money if you already have money.
        
             | mannykannot wrote:
             | It would really help if your parents know someone who can
             | and will take the other side in this game.
        
             | renewiltord wrote:
             | A popular view is that having wealthy parents gives one a
             | great advantage. Another popular view is that working
             | extraordinarily hard for money is a waste of one's life
             | even if one gets the money. But the two are only consistent
             | if one believes that one's own life is the optimization
             | target. If I live a life of misery so that my children live
             | a life of prosperity that would strike me as a phenomenal
             | result.
             | 
             | So another reading is "choose to give your children wealthy
             | parents".
        
           | cbsks wrote:
           | My simulation shows that with a 52 card deck, if you round
           | the bet to the nearest $.01 you will need to start with
           | $35,522.08 to win a total of $293,601.28.
           | 
           | If you start with $35,522.07 or less, you will lose it all
           | after 26 incorrect cards.
        
             | boothby wrote:
             | Nearest rounding does seem like a mistake here. Rounding
             | down is quite safe: rather than lose it all, you end up
             | with at least 2^26 pennies.
        
         | kamaal wrote:
         | >>Note that you need to be able to infinitely divide your stake
         | for this to work out for you all the time.
         | 
          | This is what most people discover: you need to play every
          | toss of the coin (i.e. tosses over a very long period of
          | time), in series, for the whole strategy to work as is. You
          | can't miss a toss. If you do, you are basically missing out
          | on either a series of profitable tosses, or that one toss
          | where you make a good return. If you draw the price-vs-time
          | chart, like a renko chart, it looks pretty much like the
          | chart for any instrument would.
         | 
          | Here is the catch. In a real-world stock/crypto/forex
          | trading scenario, that means you basically have to take
          | nearly every trade. Otherwise the strategy doesn't work as
          | well.
         | 
         | The deal about tossing coins to conduct this experiment is _you
         | don 't change the coin during the experiment_. You don't skip
         | tosses, you don't change anything at all. While you are trading
         | all this means- You can't change the stock that you are
         | trading(Else you would be missing those phases where the
         | instruments perform well, and will likely keep landing into
         | situations with other instruments where its performing bad),
         | you can't miss trades, and of course you have to keep at these
         | for very long periods of time to work.
         | 
          | Needless to say, this requires insane consistency. Doing
          | this day after day can also be draining on your mental and
          | physical health; where there is money there is stress. You
          | can't do this for long, basically.
        
           | teo_zero wrote:
            | While I disagree with nearly everything you stated, I
            | enjoyed your prose: I suppose you left out words here and
            | there as a metaphorical proof of your claim that you
            | can't miss a single toss, didn't you?
        
             | kamaal wrote:
             | >>I suppose you left out words here and there as a
             | metaphorical proof of your claim that you can't miss a
             | single toss, didn't you?
             | 
              | You must always practice in real-world conditions.
              | Notice that in the experiments conducted in programs,
              | you take the tosses as they come, even if they number
              | in the thousands, one after the other, without missing
              | a single one. Unless you can repeat this in a live
              | scenario, this is not a very useful strategy.
             | 
             | Kelly criterion is for people who are planning to take
             | large number of trades over a long period of time, hence
             | the idea is to ensure failures are not fatal(this is what
             | ensures you can play for long). As it turns out if you play
             | for really long, even with a small edge, small wins/profits
             | tend to add to something big.
             | 
              | If you remove all the math behind it, it's just this: if you
             | have a small edge to win in a game of bets, find how much
             | you can bet such that you don't lose your capital. If you
             | play this game for long, like really really long, you are
             | likely to make big wins.
        
               | teo_zero wrote:
               | You are conflating 2 concepts: a) that the reality
               | converges to what the theory predicts only after a great
               | number of samples; b) that if you skip some events the
               | results will vary.
               | 
               | Now, b) is false. You can change the code to extract 3
               | random numbers each time, discard the first 2 and only
               | consider the third one, the results won't change.
               | 
               | Instead a) is generally true. In this case, the Kelly
               | strategy is the best strategy to play a great number of
               | repeated games. You _could_ play some games with another
               | strategy and win more money, but you 'll find that you
               | can't beat Kelly in the long term, ideally when the
               | repetitions approach infinity.
        
               | kamaal wrote:
               | >>Now, b) is false. You can change the code to extract 3
               | random numbers each time, discard the first 2 and only
               | consider the third one, the results won't change.
               | 
               | Might be in theory. In practice, this is rarely true.
               | 
                | Take trading, for example. What happens (is about to
                | happen) depends on what just happened. A stock could
                | be overbought/oversold, range-bound, moving in a
                | specific direction, etc. This decides what's about to
                | happen next. Reality is rarely ever random.
               | 
                | I'm sure if you study a coin toss, for example, you
                | can find similar patterns. E.g. if you have a tired
                | thumb, I'm pretty sure it affects the height of the
                | toss, affecting the results.
               | 
               | >>Instead a) is generally true. In this case, the Kelly
               | strategy is the best strategy to play a great number of
               | repeated games.
               | 
               | Indeed. But do make it a point to repeat exact sequences
               | of events you practiced.
        
           | auc wrote:
           | If you assume coin tosses are independent, it shouldn't
           | matter if you miss coin tosses.
        
             | kamaal wrote:
             | Coin tosses are not independent. Unless the premise is
             | coins toss themselves.
             | 
              | A person tosses a coin, so tosses are connected to each
              | other.
             | 
             | Ask yourself this question- Would your thumb hurt if you
             | toss a coin 5000 times? If so, would that change the
             | results?
        
               | PaulHoule wrote:
               | Naturally tossed coins tend to land on the same side they
               | started with 0.51 of the time, see
               | 
               | https://www.stat.berkeley.edu/~aldous/157/Papers/diaconis
               | _co...
        
               | aidenn0 wrote:
               | Linked paper does not state that; it states that tossed
                | coins tend to be _caught_ on the same side they started
               | with slightly more than half the time. The results
               | explicitly exclude any bouncing (which will happen if a
               | coin lands on a hard surface).
               | 
               | The paper does discuss coins allowed to land on a hard
               | surface; it is clear that this will affect the
               | randomness, but not clear if it increases or decreases
               | randomness, and suggests further research is needed.
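
teo_zero's claim above, that skipping tosses changes nothing when the flips are independent, is quick to check numerically (a throwaway sketch with a deliberately biased coin):

```python
import random

rng = random.Random(1)
flips = [rng.random() < 0.52 for _ in range(300_000)]  # biased coin
kept = flips[2::3]          # skip two of every three tosses

print(sum(flips) / len(flips))  # ~0.52
print(sum(kept) / len(kept))    # still ~0.52: skipping changes nothing
```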
        
         | tgma wrote:
         | Yup, the dual would be saying Martingale can't fail with
         | infinite money.
        
           | aidenn0 wrote:
            | It's not, because here there is a finite amount of money
            | at which this can't fail, which is never the case for
            | Martingale. Martingale is actually likely to bankrupt you
            | against a casino that is much better staked than you
            | _even if_ you have a small advantage.
        
         | ab_goat wrote:
         | Finally a real world use case for bitcoin!
        
         | nyeah wrote:
         | It's a good point. I think it affects the realism of the model.
         | When the stake is very low, finding a penny on the street gives
         | an astronomical improvement in the end results. At the high
         | end, it's possible the counterparty might run out of money.
        
       | ed-209 wrote:
       | why use a static seed on the random generator and could that be
       | making this appear more interesting than it might otherwise?
        
         | jfengel wrote:
         | The idea is sound. The static seed is presumably so the results
         | are repeatable, but it works for true randomness. (Assuming you
         | were permitted to do it this way, which you wouldn't be.)
        
       | fancy_pantser wrote:
        | A very similar card game, played by deciding when to stop
        | flipping cards from a deck where red is $1 and black is -$1,
        | is described in Timothy Falcon's quantitative-finance
        | interview book (problem #14). Gwern describes it and also
        | writes code to prove out an optimal stopping strategy:
        | https://gwern.net/problem-14
        
         | jmount wrote:
         | That is a nice game and writeup.
        
         | snthpy wrote:
         | Nice!
         | 
          | Only quibble I have is that black should be +$1 and red -$1 to
         | follow standard finance conventions, i.e. be in the "black" or
         | "red".
        
       | JohnMakin wrote:
       | Kelly criterion is one of my favorite game theory concepts that
       | is used heavily in bankroll management of professional gamblers,
       | particularly poker players. It is a good way to help someone
       | understand how you can manage your finances and stakes in a way
       | that allows you to climb steadily forward without risking too
        | much or any ruin, but is frequently misapplied in that space.
        | The problem is that Kelly deals with binary results, and in
        | situations where it is applied to results that are not binary
        | (a criterion for applying it) you can see skewed results that
        | look almost right, but not quite, depending on how you view
        | the math.
        
         | amluto wrote:
         | > particularly poker players
         | 
         | The Kelly criterion seems excellent for many forms of gambling,
         | but poker seems like it could be an exception: in poker, you're
         | playing against other players, so the utility of a given
         | distribution of chips seems like it ought to be more
         | complicated than just the number of chips you have.
         | 
         | (I'm not a poker player.)
        
           | tempestn wrote:
           | It's used for bankroll management (basically deciding what
           | stakes to play) rather than for sizing bets within a
           | particular game.
        
           | fernandopj wrote:
            | Chris "Jesus" Ferguson "proved" an application of this
            | theory back in ~2009 [1]. He was at the time promoting
            | Full Tilt and committed to turning a $1 bankroll into
            | $10,000 by applying a basic strategy of never putting
            | more than a low % of his bankroll into one tournament or
            | cash game session.
           | 
            | So, if one's skill makes a session +EV, then by limiting
            | your losses and using the fact that in poker the
            | strongest hands or better tourney positions give you a
            | huge ROI, it would be just a matter of time and
            | discipline to get to a good bankroll.
           | 
           | Just remember that for the better part of this challenge he
           | was averaging US$ 0.14/hour, and it took more than 9 months.
           | 
           | [1] https://www.thehendonmob.com/poker_tips/starting_from_zer
           | o_b...
        
             | kelnos wrote:
             | > _Just remember that for the better part of this challenge
             | he was averaging US$ 0.14 /hour, and it took more than 9
             | months._
             | 
             | But consider the rate of return! He turned $1 into $10,000
             | in 9 months. Could he then turn that $10k into $100M in
             | another 9 months?
             | 
             | Or if he'd started with $100 instead of $1, could he have
             | turned that into $1M in 9 months? That would still be
             | incredibly impressive.
             | 
             | Certainly the game changes as the bets and buy-ins get
             | higher, but even if he couldn't swing the same rate of
             | return with a higher starting point and larger bets (though
             | still only betting that same certain low percent of his
             | bankroll), presumably he could do things like turning $5k
             | into $1M. Even $100k into $1M would be fantastic.
        
               | lupire wrote:
               | I think the challenge is that the larger you bet, the
               | harder it is to find people who are bad at basic strategy
               | poker but willing to bet against you for a long series of
               | bets.
        
         | bloodyplonker22 wrote:
          | You are right that the Kelly criterion deals with binary
          | results. This won't work for poker. In poker, we use
          | expected value, because wins and losses are not binary: the
          | amount you win or lose varies. Once you figure out your
          | approximate EV, you use a variance calculator in addition
          | to that (example:
          | https://www.primedope.com/poker-variance-calculator/) to
          | see how likely it is you will be winning, and by how much,
          | over a certain number of hands in the long run.
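
A back-of-the-envelope version of what such a variance calculator does (my sketch; the winrate and standard deviation figures, in big blinds per 100 hands, are made up): a normal approximation to the probability of being ahead after a given number of hands.

```python
from math import erf, sqrt

def p_ahead(winrate_bb100, std_bb100, hands):
    """Normal approximation to P(being ahead after `hands` hands)."""
    n = hands / 100.0                  # number of 100-hand blocks
    mean = winrate_bb100 * n
    sd = std_bb100 * sqrt(n)
    return 0.5 * (1.0 + erf(mean / (sd * sqrt(2.0))))

# a 5 bb/100 winner with 100 bb/100 standard deviation, 100k hands
print(p_ahead(5, 100, 100_000))  # ~0.94
```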
        
         | peter_retief wrote:
         | Could this work with roulette betting on color? Seems like you
         | could spend a lot of time not winning or losing
        
           | plorkyeran wrote:
           | Roulette results are uncorrelated and you have the exact same
           | chance of winning each time, so the Kelly criterion isn't
           | applicable. Betting on a color has a negative edge and you
           | don't have the option of taking the house's side, so it just
           | tells you the obvious thing that you should bet zero.
        
             | dmurray wrote:
             | > exact same chance of winning each time, so the Kelly
             | criterion isn't applicable.
             | 
             | Actually, the main assumption that leads to the Kelly
             | criterion is that you will have future opportunities to bet
             | with the same edge, not constrained by the amount.
             | 
             | For example, if you knew this was your last profitable
             | betting opportunity, to maximise your expected value you
             | should bet your entire stake.
             | 
              | I'm slightly surprised it leads to such a nice result
              | for this game - I don't see a claim that this is the
              | optimal strategy for maximizing EV. Zero variance is
              | great, but having more money is also great.
             | 
             | Of course you are right about roulette and, if you are
             | playing standard casino roulette against the house, the
             | optimal strategy is not to play. But that's not because
             | bets are uncorrelated, it's because they are all negative
             | value.
        
             | Tepix wrote:
             | What makes 0 better than the other numbers?
        
               | Vecr wrote:
               | Can't bet negative in that kind of game. If a game is
               | expected to lose you money, don't play.
        
               | lupire wrote:
               | $0, not 0 on the wheel.
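
In formula terms (a sketch; f* = p - q/b is the standard Kelly fraction for a binary bet at net odds b): for betting on red, p = 18/38 and b = 1, so the fraction is negative and the recommended stake is zero.

```python
def kelly_fraction(p, b=1.0):
    """Kelly stake as a fraction of bankroll: win prob p, net odds b."""
    return p - (1.0 - p) / b

print(kelly_fraction(18 / 38))  # negative edge -> bet nothing
print(kelly_fraction(0.52))     # 0.04: a 52% coin at even money
```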
        
       | amluto wrote:
       | > The problem and solution appear to come from Thomas Cover.
       | 
       | I don't recall this specific example, but I learned about the
       | Kelly criterion in a class that Thomas Cover taught. He was one
       | of my favorite teachers, and any discussion with him was
       | guaranteed to be interesting and worthwhile. RIP.
        
       | moonlion_eth wrote:
       | I was like "oooh fun a card game" then was like "oh shit I'm too
       | dumb for this math"
        
         | IAmGraydon wrote:
         | You aren't dumb. You just don't have enough exposure to the
         | prerequisites.
        
           | necovek wrote:
           | It could also be both: though it's not necessarily that they
           | are "dumb", but that the language of mathematics is something
           | they can't get their head around, even if they can understand
           | the concepts when described in spoken language.
           | 
            | E.g. it's probably pretty easy to convince them that with
            | 15 cards in a deck, out of which 5 are red and 10 are
            | black, chances are greater (in particular 10/15 or ~67%)
            | that they'll pull out a black card, and that you should
            | bet more on this happening. If you happen to miss, you
            | should bet even more on black, since the chances grow
            | further -- to maintain this strategy, you only need to
            | never bet too much, so you have enough "funds" to bet all
            | the way through (e.g. in the worst case, where the least
            | likely thing happens: in my example, the 5 red cards
            | coming up first).
           | 
           | Putting all this reasoning into formulae is what math is, and
           | I do believe some struggle with abstracting these more than
           | others (which is why the divide does exist and why many
           | people believe those good at math are "smart", which is very
           | much not so -- seen plenty of "stupid" mathematicians, even
           | professors). Does not make them "dumb", but might make them
           | "modern math dumb". A signal that someone can be good at math
           | today is that they are unfazed with more-than-3-dimensional
           | spaces (you need to stop tying things to physical world).
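           | A minimal sketch of this 15-card intuition (the deck sizes and
           | the edge-proportional bet come from the comment above; the
           | code itself is illustrative): every ordering of the deck ends
           | at exactly the same bankroll, and the bet can never go broke.

```python
import itertools
import math
from fractions import Fraction

def play(deck):
    """Bet the fraction edge/deck_size of the bankroll on the majority color."""
    wealth = Fraction(1)
    red = deck.count('R')
    black = len(deck) - red
    for card in deck:
        edge = abs(red - black)
        bet = wealth * Fraction(edge, red + black)
        majority = 'R' if red >= black else 'B'
        wealth += bet if card == majority else -bet
        if card == 'R':
            red -= 1
        else:
            black -= 1
    return wealth

# all C(15,5) = 3003 orderings of a 5-red / 10-black deck
decks = [''.join('R' if i in pos else 'B' for i in range(15))
         for pos in itertools.combinations(range(15), 5)]
finals = {play(d) for d in decks}

# zero variance: every ordering pays exactly 2^15 / C(15,5), about 10.9x
assert finals == {Fraction(2**15, math.comb(15, 5))}
```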
        
       | lupire wrote:
       | Corollaries, by considering different deck shufflings, such as
       | perfectly interleaved or perfectly separated:
       | 
       | 9.08 ~ 52/52 x 52/51 x 50/50 x 50/49 x ... x 2/2 x 2/1
       |      = 52/51 x 50/49 x ... x 2/1
       |      = 2^52 x (26!)^2 / 52!
       |      = (52/52 x 50/51 x ... x 2/27) x (52/26 x 50/25 x ... x 2/1)
       | 
       | and these equalities can also be directly verified algebraically
       | 
       | This also points to a non-"many worlds"/portfolio version of the
       | proof of zero variance.
       | 
       | Every bet is e/d, where e is the current edge and d is the current
       | deck size. So every outcome multiplies the stack by (d + e x
       | (-1)^i)/d, where (-1)^i is +1 or -1, depending on win or loss.
       | 
       | Note that the product of all the values of d is constant, so we
       | can ignore the denominator.
       | 
       | Since we know (from the OP proof) that the product of these
       | numbers is constant for all shuffles of the deck, we can split a
       | shuffled deck anywhere such that both parts are balanced
       | red=blue, and the total (multiplicative) return over each part of
       | the deck is constant across all shuffling of that part of the
       | deck. (There are at least two ways to prove this part!)
       | 
       | This gives a further hint toward another fascinating fact:
       | over any span of the deck between points where the deck is
       | balanced, the numerators of the bet results double-cover all the
       | even numbers between the starting and ending deck size.
       | 
       | To see why:
       | 
       | * A loss after a loss has a numerator (deck minus edge) of 2 less
       | than the previous bet, as the deck size decreased by 1 and the
       | edge has increased by 1.
       | 
       | * A win after a win also has a numerator (deck plus edge) of 2
       | less than the previous bet, as the deck size decreased by 1 and
       | the edge has decreased by 1.
       | 
       | * A win after a loss causes a big swing in the numerator,
       | exactly back to the largest not yet double-covered numerator that
       | started the streak that just ended. Then the new win streak
       | continues making the second cover of even numerators, until... a
       | loss after a win jumps the numerator back to continuing the
       | sequence of decreasing even numerators, which will get their
       | second cover later when the later wins come.
       | 
       | Since the deck is balanced, the number of wins always equals the
       | number of losses, as long as we consider the 0 wager on a
       | balanced subdeck to be a loss, since it increases the edge like
       | non-degenerate losses do.
       | 
       | (When the deck is balanced, edge is 0, so the return of no-bet is
       | same as a win is same as a loss)
       | 
       | You can visualize the numerator changes like so: a crane is
       | driving from 52 to 0. Its arm is pointing either forward or
       | backward, and there is a counterweight of the same length
       | pointing in the opposite direction. At each step, the crane arm
       | is either pointing toward 0 and stretches another step toward 0,
       | or points backward to 52 and shrinks (toward 0 milestone and
       | toward 0 arm length), or it swings to the other direction.
       | Whenever the crane stretches toward 0, the counterweight
       | stretches backward, its end not moving relative to the ground.
       | 
       | Because the deck is balanced at start and empty deck is balanced,
       | the crane starts and ends with a 0-stretch arm. The front side is
       | either the crane arm stepping 2 steps forward at a time relative
       | to the ground, or holding still while the backside crane arm
       | shrinks closer, and the crane arm occasionally flips back and
       | forth pointing forward or backward. And vice versa for the
       | counterweight.
       | 
       | Over the course of the drive, the crane arm end reaches every
       | even milestone once pointing forward and once again pointing
       | backward.
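       | These corollaries can be checked in exact rational arithmetic (a
       | sketch, not from the article): the interleaved deck, the separated
       | deck, and any random shuffle all multiply the stack by exactly
       | 2^52 x (26!)^2 / 52! ~ 9.08.

```python
import math
import random
from fractions import Fraction

def payoff(deck):
    """Each draw multiplies the stack by (d + e)/d on a win or (d - e)/d
    on a loss, where e is the current edge and d the current deck size."""
    wealth = Fraction(1)
    red = deck.count('R')
    black = len(deck) - red
    for card in deck:
        d, e = red + black, abs(red - black)
        majority = 'R' if red >= black else 'B'
        wealth *= Fraction(d + (e if card == majority else -e), d)
        if card == 'R':
            red -= 1
        else:
            black -= 1
    return wealth

target = Fraction(2**52 * math.factorial(26)**2, math.factorial(52))  # ~9.08

assert payoff('RB' * 26) == target             # perfectly interleaved
assert payoff('R' * 26 + 'B' * 26) == target   # perfectly separated

deck = list('RB' * 26)
for _ in range(100):                           # and any other shuffle
    random.shuffle(deck)
    assert payoff(deck) == target
```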
        
       | lupire wrote:
       | Intuition for the bet size:
       | 
       | When the deck has d cards left, it is sensible to make d bets of
       | 1/d your stack, where each bet is that one specific card is next.
       | If there are r reds and b=r+e blues, r of these bets simply
       | cancel out r other bets, leaving e (times 1/d) remaining to be a
       | nontrivial bet.
        
       | andrewprock wrote:
       | In practice, there are a number of factors which make using Kelly
       | more difficult than in toy examples.
       | 
       | What is your bankroll? Cash on hand? Total net worth? Liquid net
       | worth? Future earned income?
       | 
       | Depending on the size of your bankroll, a number of factors come
       | into play. For example, if your bankroll is $100 and you lose it
       | all it's typically not a big deal. If you have a $1 million
       | bankroll, then you are likely more averse to risking it.
       | 
       | What is the expected value? Is it known? Is it stationary? Is the
       | game honest?
       | 
       | Depending on the statistical profile of your expected value, you
       | are going to have to make significant adjustments to how you
       | approach bet sizing. In domains where you can only estimate your
       | EV, and which are rife with cheats (e.g. poker), you need to size
       | your wagers under significant uncertainty.
       | 
       | What bet sizes are available?
       | 
       | In practice, you won't have a continuous range of bet sizes you
       | can make. You will typically have discrete bet sizes within a
       | fixed range, say $5-$500 in increments of $5 or $25. If your
       | bankroll falls too low, you will be shut out of the game. If your
       | bankroll gets too high, you will no longer be able to maximize
       | your returns.
       | 
       | At the end of the day, professional gamblers are often wagering
       | at half-Kelly, or even at quarter-Kelly, due in large part to all
       | these complexities and others.
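       | A small illustration of the fractional-Kelly point (the win
       | probability p = 0.55 is a made-up number, not from the thread):
       | half-Kelly keeps roughly three quarters of the growth rate, and an
       | overestimated edge hurts full Kelly far more.

```python
import math

def growth(f, p):
    """Expected log growth per even-money bet: win prob p, stake fraction f."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

p = 0.55                  # hypothetical edge; in practice only an estimate
f_kelly = 2 * p - 1       # Kelly stake for an even-money bet: 0.10

# half-Kelly keeps roughly 75% of the growth rate at far lower variance
print(growth(f_kelly / 2, p) / growth(f_kelly, p))

# and it is far more forgiving of an overestimated edge: if the true win
# probability is only 0.52, the full-Kelly stake loses money on average
# while the half-Kelly stake still grows
assert growth(f_kelly, 0.52) < 0 < growth(f_kelly / 2, 0.52)
```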
        
         | zahlman wrote:
         | > In practice, you won't have a continuous range of bet sizes
         | you can make.
         | 
         | You may also be required to pay for the privilege of placing a
         | bet (spread and commissions in trading; the rake at a casino
         | table).
        
       | ilya_m wrote:
       | Beautiful, thanks for sharing it!
       | 
       | I think the portfolio argument is an unnecessary detour though.
       | There's a two-line proof by induction.
       | 
       | 1. The payoff in the base case of (0,1) or (1,0) is 2.
       | 
       | 2. If we are at (r,b), r >= b, have $X, and stake (r-b)/(r+b) on
       | red, the payoff if we draw red and win is X * (1+(r-b)/(r+b)) *
       | 2^(r+b-1) / (r+b-1 choose r-1) = X * 2^(r+b) * r / ((r+b) *
       | (r+b-1 choose r-1)) = X * 2^(r+b) / (r+b choose r).
       | 
       | Similarly, if we draw black and lose, the payoff is X *
       | (1-(r-b)/(r+b)) * 2^(r+b-1) / (r+b-1 choose r) = X * 2^(r+b) * b
       | / ((r+b) * (r+b-1 choose r)) = X * 2^(r+b) / (r+b choose r). QED
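       | The induction can also be checked exhaustively for small decks;
       | this sketch walks every draw order and confirms the single payoff
       | X * 2^(r+b) / (r+b choose r).

```python
import math
from fractions import Fraction

def payoffs(r, b, x):
    """All final bankrolls over every draw order, staking |r-b|/(r+b) on the majority."""
    if r == 0 and b == 0:
        return {x}
    f = Fraction(abs(r - b), r + b)
    out = set()
    if r > 0:   # a red is drawn: a win iff red was the (weak) majority
        out |= payoffs(r - 1, b, x * (1 + (f if r >= b else -f)))
    if b > 0:   # a black is drawn: a win iff black was the strict majority
        out |= payoffs(r, b - 1, x * (1 + (f if b > r else -f)))
    return out

# every (r,b) up to 5x5 yields a single payoff: x * 2^(r+b) / C(r+b, r)
for r in range(1, 6):
    for b in range(1, 6):
        assert payoffs(r, b, Fraction(1)) == {Fraction(2**(r + b), math.comb(r + b, r))}
```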
        
         | lupire wrote:
         | Why isn't your inductive proof an unnecessary detour?
        
       | lordnacho wrote:
       | Interesting side note on Kelly:
       | 
       | In probability theory, Proebsting's paradox is an argument that
       | appears to show that the Kelly criterion can lead to ruin.
       | Although it can be resolved mathematically, it raises some
       | interesting issues about the practical application of Kelly,
       | especially in investing. It was named and first discussed by
       | Edward O. Thorp in 2008.[1] The paradox was named for Todd
       | Proebsting, its creator.
       | 
       | https://en.wikipedia.org/wiki/Proebsting%27s_paradox
        
         | dominicrose wrote:
         | Quoting the same page: One easy way to dismiss the paradox is
         | to note that Kelly assumes that probabilities do not change.
         | 
         | That's good to know. Kelly is good if you know the
         | probabilities AND they don't change.
         | 
         | If you don't know or if they can change, I expect the right
         | approach has to be more complex than the Kelly one.
        
           | cubefox wrote:
           | In particular, then the right approach has to be more risk
           | averse than Kelly would recommend. In reality, most
           | probabilities can only be estimated, while the objective
           | probabilities (e.g. the actual long run success rate) may
           | well be different and lead to ruin. That's also what makes
           | the title "Kelly can't fail" more wrong than right in my
           | opinion.
        
             | LegionMammal978 wrote:
             | For the issue in Proebsting's paradox, one simple approach
             | I've found successful is to gradually accumulate your full
             | bet as the betting lines progress to their ultimate
             | positions. This works even in illiquid markets where your
             | bets affect the lines, since it gives the other
             | participants less room to suddenly react to what you're
             | doing. (Though you always have the slight worry of a huge
              | last-second bet that _you_ can't react to, eBay-auction
             | style.)
             | 
             | As for the actual probability being different from the
             | expected probability, that's not too difficult to account
             | for. Just set up a distribution (more or less generous
             | depending on your risk tolerance) for where you believe the
             | actual probability may lie, and work out the integrals as
             | necessary, recalling that you want to maximize expected
             | log-value. It's not the trivial Kelly formula, but it's
             | exactly the same principle in the end.
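              | A sketch of that integral (the Beta(8, 6) belief about p
              | is a made-up example): grid-search the stake maximizing
              | expected log wealth. For a simple even-money bet the
              | optimum collapses to Kelly at the mean of the belief,
              | since expected log growth is linear in p.

```python
import math

# a made-up Beta(8, 6) belief about the true win probability p of an
# even-money bet; mean E[p] = 8/14 ~ 0.571
A, B = 8, 6
norm = math.gamma(A + B) / (math.gamma(A) * math.gamma(B))
STEPS = 1000
ps = [(i + 0.5) / STEPS for i in range(STEPS)]
ws = [norm * p**(A - 1) * (1 - p)**(B - 1) / STEPS for p in ps]  # midpoint rule

def expected_log_growth(f):
    """E over the belief of p*log(1+f) + (1-p)*log(1-f) for stake fraction f."""
    return sum(w * (p * math.log(1 + f) + (1 - p) * math.log(1 - f))
               for p, w in zip(ps, ws))

# grid-search the stake that maximizes expected log wealth
best_f = max((i / 1000 for i in range(500)), key=expected_log_growth)

# for this payoff the integral collapses to Kelly at the prior mean:
# f* = 2*E[p] - 1, because expected log growth is linear in p
assert abs(best_f - (2 * A / (A + B) - 1)) < 1e-2
```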
        
               | cubefox wrote:
               | I think the problem with estimating a distribution is the
               | same, it might simply not match reality (actual unknown
               | success rates, actual unknown variance of your estimation
               | of success rates being correct). In particular, if you
               | are too optimistic relative to reality, Kelly betting
               | will lead to ruin with high objective probability.
        
             | lupire wrote:
              | The title is gentle clickbait: it applies to one specific
              | game with 0 variance, not to all uses of Kelly.
        
           | csours wrote:
           | Unfortunately, in the real world, playing the game changes
           | the game.
           | 
           | For instance, casinos have different payout schedules for
           | Blackjack based on minimum bet size and number of decks in
           | the shoe. Payouts for single deck Blackjack are very small in
           | comparison to multi-deck games, as well as requiring larger
           | minimums (and they shuffle the deck after only a few hands).
        
       | PaulHoule wrote:
       | When I was a teen I discovered that I could always guess more
       | than half the cards right using card counting to determine what
       | color is more common in the deck. I programmed my
       | 
       | https://en.wikipedia.org/wiki/TRS-80_Model_100
       | 
       | to simulate it and it never failed. Recently I thought about it
       | again and wrote a Python script that tried it 30 million times
       | and... it never failed.
       | 
       | I've been thinking about what to do with it and came up with the
       | options of (i) a prop bet and (ii) a magic trick, neither of
       | which seemed that promising.
       | 
       | As a prop bet I can offer $1000 to somebody's $10 which is not
       | the route to great prop bet profits, also I worry that if I make
       | a mistake or get cheated somehow I could be out a lot of money.
       | (Now that I think of it maybe it is better if I re-organize it as
       | a parlay bet)
       | 
       | As a magic trick it is just too slow paced. I developed a patter
       | to the effect that "Parapsychologists were never able to reliably
       | demonstrate precognition with their fancy Zener cards, but I just
       | developed a protocol where you can prove it every time!" but came
       | to the conclusion that it was not entertaining enough. It takes a
       | while to go through a deck, which doesn't seem like a miracle, and
       | you would have to do it 7 times in a row to exclude the null
       | hypothesis at p=0.01. Maybe somebody with more showmanship could
       | do it but I gave up.
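       | A re-creation of that experiment in Python (not the original
       | scripts): guess whichever color is more common in the remaining
       | deck. Only the guesses made at a tied count can cost you, so you
       | never fall below half correct.

```python
import random

def guesses_right(deck):
    """Always guess the color more common in the remaining deck (tie: red)."""
    red = deck.count('R')
    black = len(deck) - red
    correct = 0
    for card in deck:
        guess = 'R' if red >= black else 'B'
        correct += (card == guess)
        if card == 'R':
            red -= 1
        else:
            black -= 1
    return correct

deck = list('R' * 26 + 'B' * 26)
results = []
for _ in range(10_000):
    random.shuffle(deck)
    results.append(guesses_right(deck))

# every correct guess at a majority count is guaranteed to be "paid back"
# by the count itself, so no shuffle can do worse than half
assert min(results) >= 26
print(sum(results) / len(results))   # averages around 30 correct of 52
```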
        
         | jdhwosnhw wrote:
         | That reminds me of my favorite algorithm, which can find the
         | majority element in a list with any number of distinct entries
         | _while using O(N) time and O(1) space_ (provided a majority
         | element exists). I sometimes pose deriving this algorithm as a
         | puzzle for people, no one has ever solved it (nor could I).
         | 
         | https://en.m.wikipedia.org/wiki/Boyer%E2%80%93Moore_majority...
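         | The algorithm is short enough to sketch in full, including the
         | verification pass it needs when a majority is not guaranteed:

```python
def majority(items):
    """Boyer-Moore majority vote: O(n) time, O(1) extra space for pass 1."""
    candidate, count = None, 0
    for x in items:              # pass 1: only one candidate can survive
        if count == 0:
            candidate, count = x, 1
        elif x == candidate:
            count += 1
        else:
            count -= 1
    # pass 2: verify, since pass 1 is only correct when a majority exists
    if items.count(candidate) * 2 > len(items):
        return candidate
    return None

assert majority([3, 1, 3, 2, 3, 3, 1]) == 3   # 3 holds 4 of 7 slots
assert majority([1, 2, 3]) is None            # no majority element
```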
        
           | barapa wrote:
           | That is really cool
        
           | lupire wrote:
           | What's great about that is that the assumption (or O(n)
           | check) that the majority exists is incredibly powerful,
           | enabling the algorithm, which is nearly the dumbest possible
           | algorithm, to work.
           | 
           | The one flaw in the magic is that "2nd pass to verify" is a
           | significant cost, transforming the algorithm from online
           | streaming O(1) space to O(n) collection-storage space.
        
       | im3w1l wrote:
       | This article uses _stake_ to mean _bankroll_ , but usually it
       | denotes _bet size_.
        
       | bell-cot wrote:
       | Interesting as a mathematical puzzle - but note that it's
       | difficult to find cooperative, solvent counter-parties for "I
       | can't lose" betting games.
        
       ___________________________________________________________________
       (page generated 2024-12-20 23:00 UTC)