August 11th, 2012 at 3:08:39 PM
permalink

Error made.

August 11th, 2012 at 4:01:15 PM
permalink

I'm not entirely sure that a fixed-threshold bet/call strategy will perform better than a strategy involving a random element in the bet or call decision.

August 11th, 2012 at 4:42:44 PM
permalink

Quote: MangoJ
I'm not entirely sure that a fixed-threshold bet/call strategy will perform better than a strategy involving a random element in the bet or call decision.

I've been wrong before, but I'll post what I think the answer is tomorrow. I would challenge anybody to produce a better strategy for either player to go against my strategy.

It's not whether you win or lose; it's whether or not you had a good bet.

August 11th, 2012 at 4:53:17 PM
permalink

Here is my answer:

Player X should raise if the number is below 0.1 or above 0.7

Player Y calls if number is above 0.1

Expected Value for player X is 0.1
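For what it's worth, a quick Monte Carlo sketch of the game as I read it (both ante $1, X may bet $1, Y calls or folds; my own code, not anyone's posted solution) agrees with that expected value:

```python
import random

def play_hand(rng):
    """One hand, returning X's net result in dollars."""
    x, y = rng.random(), rng.random()          # each player's uniform "hand"
    if x < 0.1 or x > 0.7:                     # X bets: bluff low, value-bet high
        if y > 0.1:                            # Y calls any hand above 0.1
            return 2 if x > y else -2          # $4 pot: win or lose $2 net
        return 1                               # Y folds: X nets the antes
    return 1 if x > y else -1                  # X checks: showdown for the antes

rng = random.Random(1)
n = 1_000_000
ev = sum(play_hand(rng) for _ in range(n)) / n
print(round(ev, 3))   # close to 0.1
```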

August 11th, 2012 at 6:53:19 PM
permalink

I get the same answer. Here's my method; incidentally, I guessed the right answer after step (6 f), as it made the maths simpler, before proving it!

(1) Firstly if X checks then it's a 50% game

(2) Secondly, if X always bets, then Y will call with .25+ (as he stands to lose $1 vs. gain $3)

(2 a) Hence bad strategy for X to "always" bet.

(3) From the above it can be seen Y should always call any bet when his expected chance of winning is 25% or more

(4) Is it worth X bluffing ??

(4 a) Sometimes he wins pots which he should lose

(4 b) Sometimes he gets paid off on good hands

(4 c) Against that, bets are lost when a "bluff" is called.

(5) Consider X bets with [0,A) (bluffing) and (B,1] (good hand)

(5 a) Y will call with a hand that is 1/4 of the way along X's betting range

(5 b) Obviously the call point has to be in the (B,1] range so bluffs gain

(5 c) Mathematically (A-0) has to be less than 3*(1-B) (i.e. only bluff sometimes).

(6) If A is zero (i.e. never bluff), Y will call 1/4 into your range

(6 a) Suppose you bet with .6 or more

(6 b) Y will call with .7 or more else fold

(6 c) Where you both have .7 or more, it remains 50-50 (sometimes you win one more, or lose one more)

(6 d) Where you have .6xxx you can win pots you shouldn't (Y = .6xxx) but will lose an extra $1 if Y has a good hand.

(6 e) Since you lose $1 with p=.3 and only gain $2 with p<.1 …

(6 f) Unless you bluff, you are better off not betting.

Suppose you bluff [0 - 0.1) and bet [0.7 - 1]

Obviously if you check, then the action has no effect on what would have happened anyway.

The figures above mean Y should call with any hands of .7 or above.

Thus if you both have good hands, that means sometimes you win $1 more and sometimes you lose $1 more

If you have a good hand and Y doesn't, you were destined to win the pot, so no change

The advantage comes from the bluffing times.

(a) Y has hand (0 - 0.1) - half the times you've stolen a pot - so profit = $2 * ( .1 * .1 /2)

(b) Y had hand (0.1 - 0.7) - you've stolen a pot - so profit = $2 * (.1 * .6)

(c) Y had hand (0.7-1) - you've lost an extra $1 - so loss = $1 * (.1 * .3)

Now generalise this as [0 to A) and [(1-3A) to 1]

Profit = 2 * A * A / 2 + 2 * A * (1-4A) - 1 * A * 3A

…. = 2A - 10*A*A

Which amazingly has a maximum at A = .1 (simple differentiation: 2 - 20A = 0)
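As a sanity check, that profit curve can be scanned numerically (a trivial sketch of the same formula as above):

```python
def profit(a):
    """Extra profit from bluffing on [0, A) with value bets on [1-3A, 1]: 2A - 10*A*A."""
    return 2 * a - 10 * a * a

# Scan A from 0 to 0.3 in steps of 0.001 and pick the peak.
best_profit, best_a = max((profit(k / 1000), k / 1000) for k in range(301))
print(best_a, round(best_profit, 3))   # peak at A = 0.1
```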

August 11th, 2012 at 7:29:31 PM
permalink

It looks like to me that our answers are slightly different, but have the same expected value.

It doesn't matter expected-value-wise whether player Y calls or folds when holding 0.1-0.7, since such a hand can only beat 25% of the range of X's betting hands. However, by calling on all hands above 0.1, he will have a higher standard deviation than if he only called hands above 0.7. Furthermore, he could play a percentage of the hands between 0.1 and 0.7 and have a standard deviation somewhere in between.

SD(Y calls on all hands above 0.1) = 1.439

SD(Y calls on all hands above 0.7) = 1.162

This is under the assumption that I actually got the problem right (at least as far as not using a randomization strategy).
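Those figures can be reproduced with a quick simulation (my own sketch, using net dollars for X and the strategies as stated):

```python
import random

def hand_result(x, y, call_at):
    """X's net result: X bluffs below 0.1, value-bets above 0.7, else checks."""
    if x < 0.1 or x > 0.7:                 # X bets $1
        if y > call_at:                    # Y calls: $4 pot, net +/-$2
            return 2 if x > y else -2
        return 1                           # Y folds: X nets the antes
    return 1 if x > y else -1              # check: showdown for the antes

rng = random.Random(7)
n = 1_000_000
stats = {}
for call_at in (0.1, 0.7):
    r = [hand_result(rng.random(), rng.random(), call_at) for _ in range(n)]
    mean = sum(r) / n
    sd = (sum(v * v for v in r) / n - mean * mean) ** 0.5
    stats[call_at] = (mean, sd)
    print(f"Y calls above {call_at}: EV = {mean:.3f}, SD = {sd:.3f}")
```

Both call points show the same EV of about 0.1 for X; only the spread differs.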

August 12th, 2012 at 4:40:36 AM
permalink

This problem puzzled me, so I wrote a self-learning numerical simulation.

X and Y follow a strategy that they will bet or call based on their drawn number with a given probability.

Blue dots are the bet probability for X, green circles are the call probabilities for Y.

The simulation oscillates a little bit, and the discrete levels of Y depend on the size of the simulation.

Results are:

X should bet on ~0.7 or greater, and bluff on ~0.1 or lower.

Y should call on ~0.7 or greater, and call between 0.1 and 0.7 with increasing probability.

Overall EV for X is 0.09997 (probably 1/10 in an "infinite" version).

This turns out to be *quite* interesting. Thank you Wizard for this problem. Quite frankly, I don't understand much of the "analytical" answers given above, but I admire their beauty (and consistent results). You have now hooked me on poker math.
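I don't have MangoJ's code, but a minimal fictitious-play sketch of this kind of self-learning simulation (discretized hands, each player repeatedly best-responding to the other's average strategy; all names and parameters here are my own reconstruction) shows the same shape emerging:

```python
K, ITERS = 50, 500                      # hand bins and learning iterations
bet_avg = [0.5] * K                     # X's average bet probability per hand bin
call_avg = [0.5] * K                    # Y's average call probability per hand bin

def p_x_wins(i, j):
    """Chance X's bin-i hand beats Y's bin-j hand (ties split)."""
    return 1.0 if i > j else 0.5 if i == j else 0.0

for it in range(1, ITERS + 1):
    # X best-responds to Y's average call policy: betting earns +1 on a fold
    # and +/-2 when called; checking earns +/-1 at showdown.
    bet_br = []
    for i in range(K):
        ev_check = sum(2 * p_x_wins(i, j) - 1 for j in range(K)) / K
        ev_bet = sum((1 - call_avg[j]) + call_avg[j] * (4 * p_x_wins(i, j) - 2)
                     for j in range(K)) / K
        bet_br.append(1.0 if ev_bet > ev_check else 0.0)
    # Y best-responds to X's average bet policy: call (+/-2) vs fold (-1),
    # weighting X's hands by how often each one bets (Bayes).
    weight = sum(bet_avg) or 1e-12
    call_br = []
    for j in range(K):
        ev_call = sum(bet_avg[i] * (2 - 4 * p_x_wins(i, j)) for i in range(K)) / weight
        call_br.append(1.0 if ev_call > -1.0 else 0.0)
    # Fictitious play: fold the new best responses into the running averages.
    bet_avg = [(a * it + b) / (it + 1) for a, b in zip(bet_avg, bet_br)]
    call_avg = [(a * it + b) / (it + 1) for a, b in zip(call_avg, call_br)]

# X ends up betting strong hands (plus some bluffs); Y calls with good hands.
print("bet prob near 0.9:", round(bet_avg[45], 3), " near 0.4:", round(bet_avg[20], 3))
print("call prob near 0.9:", round(call_avg[45], 3), " near 0.05:", round(call_avg[2], 3))
```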

August 12th, 2012 at 9:04:58 AM
permalink

When I did the math I got the same X strategy as mentioned above, raise with less than 0.1 or more than 0.7. However, my Y strategy was to call with 0.4 or greater.

I see JTate says Y's counter-strategy is equally good calling anywhere from 0.1 to 0.7. I haven't confirmed that, but I've noticed with similar problems that there is often a range of equally good counter-strategies, and I can confirm a Y call point of 0.4 has the same EV as a call point of 0.1, namely 0.1 for X.

Mango's answer is also interesting. It would seem Y can do whatever he wants between 0.1 and 0.7.
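This indifference is easy to verify; here is a small sketch (my own integration of X's net EV, assuming X bluffs below 0.1 and bets above 0.7, and Y calls above a fixed point t):

```python
def ev_x(t):
    """Exact net EV for X versus Y calling above t, valid for 0.1 <= t <= 0.7."""
    check = -0.12              # integral of (2x - 1) over the checking range (0.1, 0.7)
    bluff = 0.1 * (3 * t - 2)  # Y folds (+1) w.p. t, else the bluff loses $2
    value = 0.42 - 0.3 * t     # integral of (4x - t - 2) over the value range (0.7, 1)
    return check + bluff + value

for t in (0.1, 0.4, 0.7):
    print(t, round(ev_x(t), 6))   # 0.1 at every call point
```

The t-terms cancel, so any call point in [0.1, 0.7] gives X the same 0.1.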

Good work gang! I will post a more difficult problem tomorrow.

It's not whether you win or lose; it's whether or not you had a good bet.

August 12th, 2012 at 9:17:43 AM
permalink

I don't understand. :( Need some help please!

How is it you guys are saying the first player should check between .1 and .7?

Suppose I have .6. If I check, my EV is 1.2. If I raise and the other guy folds, it is 2. If he calls, it is 1.4. Either way, I get more than 1.2. Why would I not want to bet?
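One way to see the trade-off (my own net-dollar accounting, assuming Y calls above a fixed point t and the bettor's hand x is above t): betting a middling hand only beats checking when Y's call point is low enough, because when Y does call, his range is strong.

```python
def ev_bet(x, t):
    """Net EV of betting hand x when Y calls above t (assumes x > t)."""
    return t * 1 + 2 * (x - t) - 2 * (1 - x)   # fold / called and win / called and lose

def ev_check(x):
    """Net EV of checking: showdown for the $2 of antes."""
    return 2 * x - 1

for t in (0.1, 0.2, 0.4):
    print(f"call point {t}: bet EV = {ev_bet(0.6, t):.2f}, check EV = {ev_check(0.6):.2f}")
```

Against the Wizard's Y call point of 0.4, betting a 0.6 nets less than checking it; only against a very loose caller (call point below 0.2) does the bet come out ahead.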

"When two people always agree one of them is unnecessary"

August 12th, 2012 at 9:25:53 AM
permalink

Quote: Wizard
It would seem Y can do whatever he wants between 0.1 and 0.7.

Simulation suggests that Y's response to 0.1 - 0.7 is highly sensitive to X's exact strategy on the bluff. I got extreme switching behaviours between calls and folds on slight variations of X's probability to bluff at exactly 0.1.

From a different poker book I remember the phrase that a perfect bluff's expectation value is independent of the opponent's response.