Quote: P90
I probably need. If the number of trials is unlimited, it means something else is the limit. In high-PA situations, the limit can be either the maximum win or the maximum win rate. For instance, you need to win $X; you cannot win more, because [select reason:] the opponent will run out of money; the opponent will sweat the money; the hole will be plugged.
The business of math works as follows. After the tedious work of definitions is done, one finally assumes something. Then, after all assumptions have been made, one can state a theorem. If the theorem can be proven, then the theorem is absolutely true, provided all assumptions are met.
What you are doing is: you reject all assumptions because of [select reason]. Sorry bro, without assumptions you can't do much in mathematics. One of the assumptions of the Kelly criterion is that each game is *independent* of any previous game. This means (among other things) that all future games and options remain available and stay the same whether or not you win or lose any amount, or even play any other game. Under this assumption, the Kelly criterion is the best way to grow a bankroll (it maximizes the expected logarithm of the bankroll).
If you think winnings from a previous game influence your future game selection, you might well be right. But this is outside the scope of the Kelly criterion. If you want to analyze the best way to grow a bankroll, you need to state the exact rules for how each game's availability depends on previous results and choices. Again, if you cannot formulate these assumptions, any mathematical reasoning is worthless.
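For illustration, here is a minimal sketch of the quantity being maximized, for a single even-money bet; the function names and the p = 0.90 figure are placeholders for this example, not anything from the thread.

import math

def expected_log_growth(f, p, b=1.0):
    # Expected log growth per bet when staking fraction f of bankroll at net odds b
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

def kelly_fraction(p, b=1.0):
    # Closed-form Kelly fraction f* = (b*p - q)/b for a single independent bet
    return (b * p - (1 - p)) / b

p = 0.90                     # placeholder win probability for illustration
f_star = kelly_fraction(p)   # 0.80 for an even-money bet
print(f_star, expected_log_growth(f_star, p))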
Quote: MangoJ
What you are doing is: you reject all assumptions because of [select reason].
No. It referred to a specific situation: "unlimited number of trials". Which is a nice abstraction. But with an unlimited number of trials, you're almost guaranteed to win all the money in the world either way. Hence bankroll growth rate would be unimportant.
My goal is simple, to disprove the notion that "Kelly Criterion is the best amount to bet in any player advantage situation".
This only requires one counterexample.
Counterexample: Given a game with 90% to win even money, 10% to lose, min bet 1 unit, max bet 10 units, bankroll 10 units, 10,000 trials.
KC f* = (b*p - q)/b = (1*0.90 - 0.10)/1 = 0.80
KC bet = 8 units, odds 0.90 to win, 0.10 lose. Two consecutive losses will give instant ruin. 0.01 probability at the beginning.
An ROR check (Monte Carlo) gives approx. 5% overall risk of dropping below the ability to continue the game; the 0.01 above is the guaranteed floor.
Avg. time to reach betting max is 1 bet.
Thus total win ~= (0.95..0.99)*(.9*8+9999*.9*10)=85500 with 5% ROR, 89098 with 1%
Alternative bet sizing: 0.1 x BR, so the first bet is 1 unit. ROR (by formula) is in the parts-per-billion range. Avg. time to reach the betting max is 27 bets.
Total expected win ~= 1*(.9*102+9973*.9*10) = 89848 units (the leading factor ~1 is the survival probability).
89848>89098, therefore bet sizing at 0.1 BR is preferable to KC bet sizing at 0.8 BR.
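If anyone wants to check this, here is a rough Monte Carlo sketch of the comparison (my own quick illustration; it uses a plain rounding rule for the stake, and the exact averages will drift a little from the back-of-envelope figures above).

import random

def simulate(frac, trials=10_000, bankroll=10, p=0.90, min_bet=1, max_bet=10):
    # Play up to `trials` even-money bets, staking round(frac * bankroll),
    # clamped to the table limits; stop if the 1-unit minimum can't be covered.
    for _ in range(trials):
        if bankroll < min_bet:
            break                               # ruined: cannot make the minimum bet
        bet = min(max_bet, max(min_bet, round(frac * bankroll)))
        bet = min(bet, bankroll)                # never stake more than we hold
        bankroll += bet if random.random() < p else -bet
    return bankroll

def average_final(frac, runs=2_000):
    return sum(simulate(frac) for _ in range(runs)) / runs

print("0.8 x BR:", average_final(0.80))   # Kelly-sized betting
print("0.1 x BR:", average_final(0.10))   # conservative betting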
Quote: Wizard
You should have bet about 10% of your BR per hand. Maybe you overbet it, but it was a good bet.
There are two and only two wealth creation formulae. Unfortunately, most APs know only one of them, the Kelly criterion; however, very few know that a full Kelly bettor has about a 13% chance of busting out in BJ.
Even at a 10% advantage, it's still a HUGE risk -- most BJ players would bet quarter-Kelly, which makes the recommended bet per hand 2.5% of your bankroll.
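As a back-of-envelope sketch of that sizing (assuming a roughly even-money wager and ignoring the extra variance from blackjack doubles and splits):

advantage = 0.10           # assumed 10% edge on a roughly even-money wager
full_kelly = advantage     # for an even-money bet, f* ~ edge (variance ~ 1 ignored)
quarter_kelly = full_kelly / 4
print(quarter_kelly)       # 0.025, i.e. 2.5% of bankroll per hand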
Quote: strictlyAP
I'm sure some people on here will say it serves me right, but I just blew out over 25k and set myself back over a year in terms of AP play.
I was playing at a local casino and noticed a dealer completely flashing her hole card almost every time. I ran to the bank, pulled out most of my roll, and headed back as fast as I could. Sat down, bought in for the table max of 2500, and lost the first 6 hands in a row while seeing her card. I'm so sick right now.
First three hands I got 18, 19, and 19 into a blackjack, a 20, and a 20. Didn't hit even though it killed me, and didn't surrender so I didn't blow cover.
Fourth hand: 8-3 vs a 6 with a 9 in the hole. Doubled, pulled an 8, dealer gets 20.
Fifth hand: 6-6 into a 5 with a ten underneath. Split, got a 5, doubled, got a 4; second 6 got a 3, doubled, pulled a 7, dealer gets 18. Next two hands: 17 into an 18, and 16 into a 6 with a four underneath - hit, got a ten, didn't matter. Next hand dealer blackjack, and honestly I forget the rest. How to lose 25k in 10 minutes.
If you read about Grosjean's exploits, he usually has confederate(s) when the dealer fails to protect the hole card in blackjack. And there's this great story of Grosjean signaling his confederate to double down on a hard twenty since the next card is an ace.
Quote: P90
Quote: MangoJ
What you are doing is: you reject all assumptions because of [select reason].
No. It referred to a specific situation: "unlimited number of trials". Which is a nice abstraction.
Kelly criterion does not assume anything about the number of tries available. In fact you would also use the Kelly criterion if you only had a single try available.
Quote:
My goal is simple, to disprove the notion that "Kelly Criterion is the best amount to bet in any player advantage situation".
No need to do that. We all know that Kelly criterion is not the best play in *any* advantage situation. It is the best play in situations where you find independent games.
Regarding your game:
Quote:
Counterexample: Given a game with 90% to win even money, 10% to lose, min bet 1 unit, max bet 10 units, bankroll 10 units, 10,000 trials.
KC f* = (b*p - q)/b = (1*0.90 - 0.10)/1 = 0.80
KC bet = 8 units, odds 0.90 to win, 0.10 lose. Two consecutive losses will give instant ruin. 0.01 probability at the beginning.
You are right, the first bet is 8 units. If you lose, you are down to 2 units. The Kelly criterion would then be to bet 2*0.8 = 1.6 units. Since you can only bet 0, 1, or 2 units, the maximum expected log bankroll is achieved by betting 1 unit.
If you then lose again, you have 1 unit left. The expected log bankroll for a full-bankroll bet is -infinity, since log(2) is some finite number but log(0) is -infinity. So the maximum expected log bankroll is to bet *nothing* when you are down to 1 unit (the minimum bet).
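To make the discrete comparison concrete, here is a quick sketch (illustrative only; the helper name is made up) that evaluates the expected log bankroll for each integer bet allowed:

import math

def expected_log(bankroll, bet, p=0.90):
    # Expected log of the bankroll after one even-money bet of `bet` units
    win, lose = bankroll + bet, bankroll - bet
    if lose <= 0:
        return float("-inf")       # the log(0) outcome drags the expectation to -infinity
    return p * math.log(win) + (1 - p) * math.log(lose)

for bankroll in (2, 1):
    for bet in range(0, bankroll + 1):
        print(bankroll, bet, expected_log(bankroll, bet))
# From 2 units the best integer bet is 1; from 1 unit the best bet is 0 (don't play).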
First of all, I cannot see why this situation is a "ruin", because you still have 1 unit left - although you cannot play. If you count a 1-unit bankroll as ruin, then you are really playing with a 9-unit bankroll at the beginning. But then you have the same problem: 1 unit left from a 9-unit bankroll counts as "ruin", so you are really playing with a bankroll of 8 units... You see? The problem is not the Kelly bet, but *your* definition of ruin ("1 unit left is ruin"). The only consistent definition is "0 units in the bankroll is ruin".
Second, you might find a strategy (e.g. always bet 1 unit) which has a lower RoR (for any definition of "ruin"), but it won't maximize bankroll growth. While you can play the game safer by only betting 1 unit, you will grow your bankroll much slower.
So what exactly do you want to optimize? Bankroll growth? RoR? Length of play?
Kelly criterion optimizes expected bankroll growth. Not playing the game optimizes RoR. Betting 1 unit (probably - I didn't check) optimizes length of play.
I cannot see this as a counterexample, because you are switching between different goals back and forth, trying to prove that strategy A is not best for goal X because strategy B is better at goal Y.
Quote: MangoJ
Kelly criterion does not assume anything about the number of tries available.
The part about unlimited trials has nothing to do with KC; it has to do with the post I was responding to.
Quote: MangoJ
No need to do that. We all know that Kelly criterion is not the best play in *any* advantage situation. It is the best play in situations where you find independent games.
No. Not even there. In *some* of these situations - but not all of them.
Quote: MangoJ
First of all, I cannot see why this situation is a "ruin", because you still have 1 unit left ... The only consistent definition is "0 units in the bankroll is ruin".
Which is the definition I used. More precisely, I defined ruin as having less than 1 unit, thus being unable to play.
Quote: MangoJ
Second, you might find a strategy (e.g. always bet 1 unit) which has a lower RoR (for any definition of "ruin"), but it won't maximize bankroll growth. While you can play the game safer by only betting 1 unit, you will grow your bankroll much slower.
This isn't true. Have you missed the numbers I provided at the end?
KC betting (0.8 x bankroll, rounded by regular rules) results in an average total win of 85500, or 8.55 units per trial.
0.1 x bankroll betting results in an average total win of 89848, or 8.985 units per trial.
8.985>8.55. It's actually 5% higher. So the more conservative strategy results in growing your bankroll, on the average, faster, not slower.
Being faster comes from the fact that once you bust, your gain in all further trials is 0, lowering the average for high-risk strategies.
You seemed to be arguing about the rounding. You can if you wish to, but it's only digging a deeper hole for your position - its problem is not with rounding, but with optimizing for an irrelevant parameter.
Quote: MangoJ
I cannot see this as a counterexample, because you are switching between different goals back and forth, trying to prove that strategy A is not best for goal X because strategy B is better at goal Y.
I have never switched goals. I set out to disprove the claim that KC is the universal optimum bet size in a player advantage game (of multiple independent trials, etc.). It is not a strawman; it is a claim that has been frequently made or implied. You just repeated it in your last post, in the second quote here.
I did what I set out to do. And I wasn't "trying to prove" it, but have proven it. A statement like that is disproved with a single counterexample. One is provided above. Countless others could be provided if someone cared to, though they may be more difficult to demonstrate (and forum member strictlyAP was in one such situation).
In the example above, a simple conservative strategy provided better results than following the KC.
Quote: P90
This isn't true. Have you missed the numbers I provided at the end?
KC betting (0.8 x bankroll, rounded by regular rules) results in an average total win of 85500, or 8.55 units per trial.
0.1 x bankroll betting results in an average total win of 89848, or 8.985 units per trial.
8.985>8.55. It's actually 5% higher. So the more conservative strategy results in growing your bankroll, on the average, faster, not slower.
Average bankroll is irrelevant for Kelly betting, because KC optimizes *log* bankroll, not bankroll. Run the numbers again and then please compare.
Quote:
I did what I set out to do. And I wasn't "trying to prove" it, but have proven it.
Sorry no. You stated that strategy B (betting 0.1 x bankroll) is better for goal Y (average bankroll) than strategy A (betting 0.8 x bankroll). However, strategy A claims to optimize goal X (average log bankroll). So nothing is proven here regarding the Kelly criterion.
Quote: MangoJ
Sorry no. You stated that strategy B (betting 0.1 x bankroll) is better for goal Y (average bankroll) than strategy A (betting 0.8 x bankroll). However, strategy A claims to optimize goal X (average log bankroll). So nothing is proven here regarding the Kelly criterion.
Quote: MangoJ
Average bankroll is irrelevant for Kelly betting, because KC optimizes *log* bankroll, not bankroll. Run the numbers again and then please compare.
It's entertaining when people fall into traps that I didn't even need to set.
You need to brush up on your math. Strategy B is better for average bankroll and even more so for average log bankroll. It should have been obvious, since it puts chance of success ahead of gain in case of success.
Do I need to calculate it for you, or would you like an argument about what base should be used first?
Hint: you'd be comparing 0.95*log(outcome A) vs 1*log(outcome B). The latter is still larger, by roughly the same 5%.
One could even bring exact outcome probability distribution into this, but it doesn't make the case any different.
What's of interest, though, is that for log(BR) this holds even with a much smaller number of trials (the 0.95 survival factor, i.e. ~5% ROR, still applies at 100 trials):
0.95*(7.2+99*9) > (.9*102+9*73), but 0.95*log(7.2+99*9) < log(.9*102+9*73)
So with 100 trials the paranoid strategy of betting 0.1 BR brings lower average win, but still higher average log(win).
Only with a still smaller number of trials does KC overtake.
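Plugging the 100-trial figures into a quick check (these are the thread's own per-bet numbers, with 0.95 as the assumed survival probability of the aggressive line):

import math

kelly_ev  = 7.2 + 99 * 9          # 0.8-BR line: first-bet figure plus 99 bets at the max
cons_ev   = 0.9 * 102 + 9 * 73    # 0.1-BR line: 27-bet ramp-up plus 73 bets at the max
survive_p = 0.95                  # assumed survival probability of the aggressive line

print(survive_p * kelly_ev, cons_ev)                      # plain EV: ~853 vs ~749, aggressive wins
print(survive_p * math.log(kelly_ev), math.log(cons_ev))  # log EV: ~6.46 vs ~6.62, conservative wins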
...You didn't even understand what you were arguing, did you? Aside from "being for KC" - but without understanding why KC even chooses to optimize for log(BR), how the log function behaves, what the strategy for straight max EV would be, and why specifically KC is good for its optimization parameter.
Or why specifically it was beaten by an even more conservative strategy in this game - if you did, you wouldn't bury yourself deeper, since log optimization favors conservative strategies.