Is that true? Quote:Neutrino I know the break-even TC for EV is +1.0, but if you stop counting for the rest of the shoe the TC will statistically tend towards 0.

Exhibit 2: Somewhere (was it on the Wz-Odds site?) I read that some position was better at a blackjack table. Why would that be?

Situation 1: as a counter, you want to base your decisions on the best info available. Accordingly, I reasoned that the best position was last, since you have more cards revealed. Why is that not true?

Situation 2: as a beginning counter, I just use a bet spread, not strategy tables. Since I make no use of the other players' cards, do I have a reason to prefer first position, because their play may affect the count on which my bet was based?

All in all, here is the mathematical question:

During play, players' decisions affect the remaining deck's content. Their play is not independent of the cards drawn. Is there a bias towards better or worse count? Or does it remain a priori independent? (By which I mean, my prior assessment of deck content is the same whatever the position played)

Quote:kubikulann Is that true? Quote:Neutrino I know the break-even TC for EV is +1.0, but if you stop counting for the rest of the shoe the TC will statistically tend towards 0.

Absolutely not. It's complete nonsense.

Situation 1: It is true, in regards to playing decisions. It is irrelevant when placing bets.

Situation 2: What? Do you mean you don't use indices? How do you not use other players' cards? You have to count them. There is no reason to prefer first position, or in this instance last position, or any position.

Question: There is no bias. However, if the count is good, it tends towards bad. If the count is bad, it tends toward good. All in all, it always tends toward neutral and toward remaining static.

Quote:AxiomOfChoice Quote:kubikulann Is that true? Quote:Neutrino I know the break-even TC for EV is +1.0, but if you stop counting for the rest of the shoe the TC will statistically tend towards 0.

Absolutely not. It's complete nonsense.

That's a fair question from a newb.

(b) If it is a CSM, then going further into the deck before receiving your cards means that cards you've counted might appear again. Thus there is a small advantage in knowing your first card is sitting on top and cannot be one of the cards from the previous hand.

(c) Subject to the above, there is also the effect of having more information by seeing more cards before you have to play your hand. This would help marginal decisions (e.g. 12 vs 4).

Thus in a shoe you want to be later; with a CSM I'm guessing (c) out-benefits (b), so you'd also prefer to be later.

Quote:Sonuvabish That's a fair question from a newb.

It's more of a math/logic question than a blackjack question, so I'm not sure that experience has anything to do with it.

In other words, if I found a mathematician who had never heard of blackjack, and explained what the term "true count" meant, I'd expect him to answer that question correctly with no hesitation, even without knowing the rules of the game.

On the other hand, I'm sure that there are people who are not good at math but have been playing blackjack successfully for years who would get it wrong.

That is precisely what I call "bias" (in statistical terms). Quote:Sonuvabish Question: There is no bias. However, if the count is good, it tends towards bad. If the count is bad, it tends toward good. All in all, it always tends toward neutral and toward remaining static.

But could you support your answer? That would be logically true if the cards were drawn at random: if the deck is heavy in good cards, they will tend to appear and the situation returns to neutral.

But my question is on the non-neutrality (non randomness) of the cards being drawn: they depend on strategy, there is no independence between the two cards a player receives and the cards (s)he hits for. So, on the whole, does that dependence lead to a specific evolution of the deck?

Of course, that means short term. In the long run, the RC definitely evens out (not the TC, I understood that). So I'm thinking specifically about one round, for example. Do you expect the count to be better or worse after just one round?

Quote:kubikulann That is precisely what I call "bias" (in statistical terms).

But I'm not so sure about your answer. That would be logically true if the cards were drawn at random: if the deck is heavy in good cards, they will tend to appear and the situation returns to neutral.

But my question is on the non-neutrality (non randomness) of the cards being drawn: they depend on strategy, there is no independence between the two cards a player receives and the cards (s)he hits for. So, on the whole, does that dependence lead to a specific evolution of the deck?

I am not sure what part of my answer was confusing. Perhaps my math terminology is guilty of improper usage?

There is no dependence on strategy. The cards are random. Taking the cards in a specific order does not affect the composition of the deck in a probabilistic fashion. I do not know exactly why you are stating they are dependent, non-random events. Perhaps you are thinking that strategy is purposeful and non-random, but it would be immaterial. Regardless, the answer is clear.

If you're using strategy decisions along with bet spread, go for the last position. If not it doesn't matter.

EDIT - Didn't realize it was true count. I suspect that it would, but the effect would be ridiculously small until the very end of the deck, which doesn't even come up in any game ever.

Someone who received two tens will not hit.

Someone with a low hand will hit, probably more than once.

A hitter receiving a big card stops hitting, small -> continues.

On the whole, I suspect the appearance of big cards means that fewer cards are drawn than when small cards appear, because of the choices of the players.

So when small cards appear, are there not only more big cards in the deck but also a smaller deck?

Quote:Boney526 I disagree with the couple of people who said the count wouldn't tend to go towards 0 if you stopped counting, because obviously at the end of the deck the count will be 0 again. I don't really think that's a useful piece of information, though, since if you missed a part of the deck it'd be more accurate to count those cards as burned or undealt.

If you're using strategy decisions along with bet spread, go for the last position. If not it doesn't matter.

If the last card is a 10, the TC is 100 before drawing the last card. They don't play to the last card. And the tendency is for the TC to remain flat. It starts at 0, and tends to remain there... once it changes, it tends to remain there. I agree, not very useful info. But if you do not understand it and want to, it may help.

Quote:Boney526 I disagree with the couple of people who said the count wouldn't tend to go towards 0 if you stopped counting, because obviously at the end of the deck the count will be 0 again. I don't really think that's a useful piece of information, though, since if you missed a part of the deck it'd be more accurate to count those cards as burned or undealt.

If you're using strategy decisions along with bet spread, go for the last position. If not it doesn't matter.

EDIT - Didn't realize it was true count. I suspect that it would, but the effect would be ridiculously small until the very end of the deck, which doesn't even come up in any game ever.

You suspect it would what? True count tends to remain the same as more cards are dealt -- ie, it is equally likely to go up as it is to go down, regardless of what the current TC is. This is obvious from the definition of true count.

The TC at the end of the shoe is not 0, it is undefined.

Quote:kubikulann Someone who received two tens will not hit.

Someone with a low hand will hit, probably more than once.

A hitter receiving a big card stops hitting, small -> continues.

On the whole, I suspect the appearance of big cards means that fewer cards are drawn than when small cards appear, because of the choices of the players.

So when small cards appear, are there not only more big cards in the deck but also a smaller deck?

What you are seeing as choices of the players, I am seeing as the random emergence of cards. If I could control the strategy of players, I would make them all hit and/or split every hand until they busted... then when the count was positive, they'd either quit playing or always stand. Standing on 20 isn't much of a strategy choice. I do not see how their choices could possibly affect the distribution of cards. The count will remain unaltered, and its tendencies will not change... your ability to capitalize could be impaired by their strategy in the short run.

Quote:Sonuvabish If the last card is a 10, the TC is 100 before drawing the last card. They don't play to the last card. And the tendency is for the TC to remain flat. It starts at 0, and tends to remain there... once it changes, it tends to remain there. I agree, not very useful info. But if you do not understand it and want to, it may help.

The OP is trying to use it because he can't count and play at the same time. So he wants to back-count, wait until the count is good, and then jump in and play until the end of the shoe.

Ignoring the cut card effect, this should more or less work, except for the fact that your variance will be much, much higher. You will often be overbetting your BR, which makes it a bad idea.

Well, that assertion is what I'm asking about. As I wrote above, I wonder if the differing strategy in front of large or small cards can affect this assertion, which would be true in pure random-draw situations. Quote:AxiomOfChoice True count is equally likely to go up as it is to go down, regardless of what the current TC is. This is obvious from the definition of true count.

Can you prove it mathematically?

I've tried with the simple Ace/Five count, but if somebody already has the calculations, I'm willing to read them.

Quote:AxiomOfChoice

The TC at the end of the shoe is not 0, it is undefined.

Good point.

Irrelevant... Quote:Sonuvabish If I could control the strategy of players,

If it helps you to understand, consider players using BS only.

Excuse me, you are confusing two threads. Quote:AxiomOfChoice The OP is trying to use it because he can't count and play at the same time. So he wants to back-count, wait until the count is good, and then jump in and play until the end of the shoe.

I'm not counting (we have CSMs anyway). I'm interested in a mathematical problem.

Quote:kubikulann Well, that assertion is what I'm asking about. As I wrote above, I wonder if the differing strategy in front of large or small cards can affect this assertion, which would be true in pure random-draw situations.

Can you prove it mathematically?

I've tried with the simple Ace/Five count, but if somebody already has the calculations, I'm willing to read them.

I'm no mathematician and can't write proofs, but this is something fairly obvious. It might just be a given.

Quote:kubikulann Can you prove it mathematically?

I can, but I'm not about to take the time to write a formal proof.

Stop and think about it for a second. It's really, really obvious.

As for the large vs small cards, just remember that expectation of random variables is additive, regardless of dependence between the variables (in this case, the expectation of the true count). I feel like a broken record repeating this every day, but E(X) + E(Y) = E(X+Y), always.
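That additivity survives even total dependence between the variables. A quick sketch (illustrative names; Y is deliberately made a deterministic function of X):

```python
import random

# E(X) + E(Y) = E(X+Y) even when Y is completely determined by X:
# X is a fair die roll, Y = 7 - X.
random.seed(0)
xs = [random.randint(1, 6) for _ in range(200_000)]
ys = [7 - x for x in xs]

mean = lambda vals: sum(vals) / len(vals)
lhs = mean(xs) + mean(ys)                    # E(X) + E(Y)
rhs = mean([x + y for x, y in zip(xs, ys)])  # E(X+Y); every x + y is 7
print(round(lhs, 6), rhs)                    # both are 7.0
```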

It is obvious if cards come out at random. But they don't! They come out faster before a low hand than before a high hand. They come out faster when they are low themselves.

So definitely NO, it is NOT obvious that there should be an equal probability of going up or down.

Quote:kubikulann Irrelevant...

If it helps you to understand, consider players using BS only.

Sorry, I don't know what else to tell you. It has no effect. I am an AP counter, as is AOC. If you are not convinced, then you probably want a different answer.

Quote:kubikulann If it were obvious, I'd have seen it. Allow me some intelligence.

It is obvious if cards come out at random. But they don't! They come out faster before a low hand than before a high hand. They come out faster when they are low themselves.

So definitely NO, it is NOT obvious that there should be an equal probability of going up or down.

I think you are confusing TC and RC. You mentioned you are playing at a CSM; if so, I don't really understand the context.

I do repeat it day after day to my students: Quote:AxiomOfChoice I feel like a broken record repeating this every day, but E(X) + E(Y) = E(X+Y), always.

E(X|A) + E(Y|B) does not equal E(X+Y)

You need the priors of A and B.

Furthermore, your "obvious" assertion seems to say that E(X) = E(Y), which is untrue.

Quote:kubikulann If it were obvious, I'd have seen it. Allow me some intelligence.

Smart people miss obvious stuff all the time. That is why I asked you to stop and think about it.

Quote: It is obvious if cards come out at random. But they don't! They come out faster before a low hand than before a high hand. They come out faster when they are low themselves.

So definitely NO, it is NOT obvious that there should be an equal probability of going up or down.

The speed that the cards come out is irrelevant. Next you will be telling me that you have a betting system to make money in a 0EV game.

Do you agree that, if the current remaining shoe composition has a true count of n, and I draw exactly one card, the expectation of the true count after drawing that card is still n? So the expectation of the effect of that one card on the true count is 0.

Now suppose I have some system where I will decide whether to draw or not draw more cards based on what cards were previously drawn. How will you compute the expected change in true count? 0, plus 0, plus 0... you see where this is going.

Quote:kubikulann I do repeat it day after day to my students:

E(X|A) + E(Y|B) does not equal E(X+Y)

You need the priors of A and B.

Furthermore, your "obvious" assertion seems to say that E(X)=E(Y) which is untrue.

Although I do not understand his math talk, I am fluent enough to know that is clearly not what the equation means. Let's make X = 1 and Y = 2: E(1) + E(2) = E(3).

We are just trying to help you. Don't get mad because you are not getting the answer you want.

If, in a round, there are many "good" hands (with 9's, 10's, A's), there will be few hits. At the end of the round, there are relatively fewer good cards in the deck (running count), but also many cards left, so that your true count is less badly affected.

If there were many "bad" hands (small cards), there have been many hits. If those hits were themselves low cards, at the end there are relatively better cards in the deck, but also fewer cards overall, which increases your TC in two ways.

So in case one your negative evolution is tempered by the division; in case two your positive evolution is enhanced by it. Overall, TC should be better on average.

IF... the prior probabilities of cases One and Two confirm it.

Do they?

Quote:kubikulann I do repeat it day after day to my students:

E(X|A) + E(Y|B) does not equal E(X+Y)

You need the priors of A and B.

Yes, this is a good point. Be sure to remember to multiply 0 by the conditional probability that you end up drawing the card before adding it.

so instead of 0 + 0 + 0 + ....

we have 0 * p(A) + 0 * p(B) + 0 * p(C) + ...

Oh, look, they are the same...

Please don't be insulting. Why such hatred when you can't understand something? Ego problems? Quote:AxiomOfChoice Next you will be telling me that you have a betting system to make money in a 0EV game.

May I respectfully point out that you do not seem to take into account the difference between prior and conditional probabilities. Before I have seen the card, expectations are unchanged. AFTER I have seen it, they have changed. Quote:AxiomOfChoice Do you agree that, if the current remaining shoe composition has a true count of n, and I draw exactly one card, the expectation of the true count after drawing that card is still n?

Now, if it is not ONE card, but the number of cards depends on what people have seen, then definitely the independence is gone.

Quote:kubikulann May I respectfully point out that you do not seem to take into account the difference between prior and conditional probabilities. Before I have seen the card, expectations are unchanged. AFTER I have seen it, they have changed.

Now I'm not sure that we are answering the same question.

The question I am answering is this:

Supposing I back-count a blackjack game, and observe that the true count is n. What is the expectation of the true count after the next hand is played? What is the expectation of the true count after the next two hands are played? The next 3?

The answer to all those questions is n.

The only caveat is that the number of additional hands to be dealt will depend on how the count moves, so it may be that the 3rd hand is played only if the count goes down, but not if it goes up (so if we restrict the question only to the cases where it is dealt, we are introducing bias and the expectation is no longer n. This is precisely the cut card effect. It's still true that for all hands from now until the end of the shoe, the expected true count is n, but you may play more hands if the count goes down than if it goes up, so your expectation if you play until the end of the shoe may not be so trivial to calculate)

Quote:kubikulann

Before I have seen the card, expectations are unchanged. AFTER I have seen it, they have changed.

Now, if it is not ONE card, but the number of cards depends on what people have seen, then definitely the independence is gone.

Expectations of what? If the count changes, then it changed. Your new expectation is the same as it was before: the count will not change. Are you trying to determine the likelihood that the count will change? It constantly fluctuates. It has already been said that the count tends to remain static. Changes tend to be incremental. The running count tends toward zero. I give up. Good luck with your analysis.

Recall that TC = RC * 52 / (number of cards left)

Suppose we are left with 25 high cards, 15 low cards, and 12 neutral cards. TC = 10 * 52 / 52 = 10.

What is the expectation of the true count after drawing 1 card?

TC_h = TC after drawing a high card is 9*52/51

TC_n = TC after drawing a neutral card is 10*52/51

TC_l = TC after drawing a low card is 11*52/51

E(TC after drawing a card)

= (25/52 * TC_h) + (15/52 * TC_l) + (12/52 * TC_n)

= (25/52 * 9*52/51) + (15/52 * 11*52/51) + (12/52 * 10*52/51)

= 510*52 / (52*51)

= 510 / 51

= 10

By replacing numbers with variables, it is not hard to show that for any deck with more than 1 card, E(TC after drawing a card) = current TC. I leave this as an exercise to the reader.
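That exercise can also be checked mechanically. A sketch in exact rational arithmetic (function names are just for illustration), trying many random compositions:

```python
from fractions import Fraction
import random

# A mechanical check of the "exercise": for any composition (h high, l low,
# n neutral cards, at least 2 cards total), the expected true count after
# one random draw equals the current true count. Exact rational arithmetic.
def tc(rc, cards):
    return Fraction(rc * 52, cards)

def expected_tc_after_one_draw(h, l, n):
    cards = h + l + n
    rc = h - l
    after = (Fraction(h, cards) * tc(rc - 1, cards - 1)    # drew a high card
             + Fraction(l, cards) * tc(rc + 1, cards - 1)  # drew a low card
             + Fraction(n, cards) * tc(rc, cards - 1))     # drew a neutral card
    return after, tc(rc, cards)

random.seed(1)
for _ in range(1000):
    h, l, n = (random.randint(0, 30) for _ in range(3))
    if h + l + n < 2:
        continue
    after, before = expected_tc_after_one_draw(h, l, n)
    assert after == before
print("E(TC after one draw) equals the current TC for every deck tried")
```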

Ok, now suppose we draw one card, and, if it's a high card, we draw another. If it's neutral or low, we draw no more cards.

To get the expected true count after this sequence of 1 or 2 draws (where the 2nd draw is conditional on the first), in the expectation line above we replace TC_h with the expected TC after drawing a card from the deck with 24 high cards, 15 low cards, and 12 neutral cards. But this is the same as TC_h, by what we established above. So, the answer is still 10. The answer will continue to be 10 no matter how we decide to draw or not draw cards (unless we deplete the whole deck, since the TC of an empty deck is undefined), because, by conditionally drawing more cards, all we are doing is conditionally replacing some true count with the expected true count after drawing one more card, ie, conditionally replacing a value with the same value. We are replacing x with p*x + (1-p)*x.

That is as close to a formal proof as I'm going to get. Replacing numbers with variables and doing some annoying algebra will give you your formal proof.
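The conditional-draw argument can likewise be enumerated exactly. A sketch, assuming the rule from the example generalized to "keep drawing as long as high cards appear, stop at the first low or neutral card":

```python
from fractions import Fraction

# Starting deck: 25 high, 15 low, 12 neutral cards (TC = 10, as above).
def tc(h, l, n):
    return Fraction((h - l) * 52, h + l + n)

def expected_final_tc(h, l, n):
    """E(TC) when we keep drawing as long as we draw high cards."""
    cards = h + l + n
    e = Fraction(0)
    if h:                                    # high card drawn: draw again
        e += Fraction(h, cards) * expected_final_tc(h - 1, l, n)
    if l:                                    # low card drawn: stop here
        e += Fraction(l, cards) * tc(h, l - 1, n)
    if n:                                    # neutral card drawn: stop here
        e += Fraction(n, cards) * tc(h, l, n - 1)
    return e

print(expected_final_tc(25, 15, 12))         # prints 10: the TC is unchanged
```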

Quote:Boney526 I disagree with the couple of people who said the count wouldn't tend to go towards 0 if you stopped counting, because obviously at the end of the deck the count will be 0 again. I don't really think that's a useful piece of information, though, since if you missed a part of the deck it'd be more accurate to count those cards as burned or undealt.

Oh no... not again. The running count tends towards zero, true. The number of remaining decks N does also. After the last card is dealt (if it ever is), RC=0, but N=0 also. TC = RC / N is not defined there... claiming it is zero is simply a false statement.

The essence is, the shoe is assumed to be shuffled well, and any undealt card is equally likely to be anywhere in the remaining shoe. The true count is a precisely defined property of the shoe's remaining composition, which statistically speaking is the same for any portion of the undealt shoe.

Not counting any part of the shoe is equivalent to taking that part out of the shoe and placing it in the discard unseen. Hence the true count, as a property of the remaining cards, stays the same when you miss cards. The only adjustment is to take as N all unseen cards, not just the cards left in the shoe.

(Note: is that what you called "obvious"?)

Quote:kubikulann Thanks, Axiom.

(Note: is that what you called "obvious"?)

Yes, IMO it is intuitively obvious. The proof is long but it's all simple high school algebra. I understood why it was true before I wrote down the proof.

There may be a simpler way to state it in terms of expectations of random variables.

Quote:kubikulann Thanks, Axiom.

(Note: is that what you called "obvious"?)

I used to let AOC hang out with me if he did my math homework and bought me cigarettes. Then I crashed his car and abandoned it as a practical joke, and we haven't hung out since.

Quote:AxiomOfChoice You suspect it would what? True count tends to remain the same as more cards are dealt -- ie, it is equally likely to go up as it is to go down, regardless of what the current TC is. This is obvious from the definition of true count.

The TC at the end of the shoe is not 0, it is undefined.

I suspect that, on average, if you started at a true count of 5, say 5 decks in, then the average true count, say 2 decks in, would be less than 5 - since the closer you get to the end of the deck, the closer you should get to (an average count of) 0 - before it becomes undefined, as you pointed out.

I mean, even if I'm right, it's not useful information, because you're never going to come across a scenario where you can utilize it.

Quote:Boney526 I suspect that, on average, if you started at a true count of 5, say 5 decks in, then the average true count, say 2 decks in, would be less than 5 - since the closer you get to the end of the deck, the closer you should get to (an average count of) 0 - before it becomes undefined, as you pointed out.

No, that is incorrect. The expected value of the true count remains at 5. It does not get closer to 0. I'm not sure why you think that it should.

If the true count is 5, that means that there are 5 extra high cards per deck left. If there are 3 decks left, that means that the running count is 15.

If you deal one out of those 3 decks, you would expect an extra 5 high cards to be dealt. Of course there may be more or less, but the expected value of the number of extra cards to come out is 5. That leaves the expected running count at 10, which leaves the expected true count at 5, since there are now 2 decks left.

If you deal another deck, you would expect another 5 extra high cards to come out, leaving the expected running count at 5 with 1 deck left, which would leave the expected true count at 5.

No matter how many cards you deal, the expected value of the true count remains 5.
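The arithmetic above can be written out as a quick sketch (exact fractions; variable names are illustrative):

```python
from fractions import Fraction

# TC of 5 with 3 decks left; deal one full deck of 52 cards at random.
decks_left = 3
true_count = 5
running_count = true_count * decks_left                 # 15 surplus high cards
cards_left = 52 * decks_left                            # 156 unseen cards

# Each dealt card carries, on average, running_count / cards_left of the
# surplus with it, so 52 dealt cards remove 52 * 15/156 = 5 of the surplus.
expected_rc_removed = Fraction(52 * running_count, cards_left)

new_rc = running_count - expected_rc_removed            # 15 - 5 = 10
new_tc = new_rc / (decks_left - 1)                      # 10 / 2 = 5
print(new_tc)                                           # prints 5: unchanged
```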

I am really sorry, but that statement is not answering the problem. Quote:AxiomOfChoice No, that is incorrect. The expected value of the true count remains at 5. It does not get closer to 0. I'm not sure why you think that it should.

If the true count is 5, that means that there are 5 extra high cards per deck left. If there are 3 decks left, that means that the running count is 15.

If you deal one out of those 3 decks, you would expect an extra 5 high cards to be dealt. Of course there may be more or less, but the expected value of the number of extra cards to come out is 5.

It would be if the cards dealt were dealt at random. This, I agree. Consequently, I also agree that, anticipating the moment when one more deck is dealt, the TC would be expected to be stationary. The argument is based on the fact that the cards are ordered randomly. OK.

But my point is that the number of cards dealt during a round depends on the value of the cards dealt. For example, if the high cards happen to be clustered in the beginning of your remaining decks, the players will tend to ask for fewer hits. Can you confidently say that the TC after these few hands is stationary?

Here we are not anticipating TC at a future deck-size stage, but at the next round.

In your numerical example, there is an expected 5 high cards dealt from one deck. OK. But that moment may come in the middle of a playing hand.

Is there an expected 5/52 of the total cards dealt after the completion of a number of hands?

N.B. I don't know the answer. I'm just saying it is NOT obvious and requires some calculation.

Quote:kubikulann I am really sorry, but that statement is not answering the problem.

It would be if the cards dealt were dealt at random. This, I agree. Consequently, I also agree that, anticipating the moment when one more deck is dealt, the TC would be expected to be stationary. The argument is based on the fact that the cards are ordered randomly. OK.

But my point is that the number of cards dealt during a round depends on the value of the cards dealt. For example, if the high cards happen to be clustered in the beginning of your remaining decks, the players will tend to ask for fewer hits. Can you confidently say that the TC after these few hands is stationary?

Here we are not anticipating TC at a future deck-size stage, but at the next round.

In your numerical example, there is an expected 5 high cards dealt from one deck. OK. But that moment may come in the middle of a playing hand.

Is there an expected 5/52 of the total cards dealt after the completion of a number of hands?

N.B. I don't know the answer. I'm just saying it is NOT obvious and requires some calculation.

When each card dealt does not change the expected count, the number of cards dealt is irrelevant. The fact that the number of cards dealt depends on the value of the previous card is irrelevant. You are still adding 0s together; all that you are changing is the number of 0s that you add together.

Do you really teach math? At what level? I'm sorry but this should be extremely obvious to anyone with a university-level math education or higher. Saying "the expectation of each card is 0, and expectation is additive, regardless of the dependence between the variables" should be enough to explain it. E(X) + E(Y) = E(X+Y) is a hammer; 95% of the gambling questions asked on this forum (including this one) are nails.

Here's a puzzle: n people arrive at a party and leave their hats at the door (each person is wearing 1 hat). As they leave, they each grab a hat at random (uniformly distributed) from the hats that are left. After everyone has left, what is the expected number of people who got their own hat back (as opposed to someone else's hat)? If it takes you more than two seconds to blurt out that the answer is exactly 1, regardless of n, then you haven't internalized linearity of expectation.

And, why are you still on this? I proved it in a previous post. Was that not sufficient?
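For the skeptical, the hat puzzle is easy to simulate (a sketch; n and the trial count are arbitrary):

```python
import random

# Hat-check puzzle: n people grab hats uniformly at random. By linearity
# of expectation, E(matches) = n * (1/n) = 1, for any n.
random.seed(42)
n, trials = 10, 200_000
matches = 0
for _ in range(trials):
    hats = random.sample(range(n), n)        # a uniformly random assignment
    matches += sum(i == hat for i, hat in enumerate(hats))
avg = matches / trials
print(round(avg, 2))                         # close to 1.0
```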

No, I'm teaching statistics. Maybe that's why you can't grasp it. It's more complicated than math. Quote:AxiomOfChoice Do you really teach math?

Repeating a statement with much force has never been a proof of validity, in my book.

Your choice of the (famous) hats example shows you are treating a different problem.

Look, I have set up a simplified problem, and the result clearly shows the expected relative composition of the deck changes after a player's hand.

Three types of cards: values are 10, 6 and 3.

A player receives one card, then chooses to hit (several times if wanted) or stand.

You bust if you go past 10.

Say that the player's strategy is:

" Hit on 3-6, stand on 7-10 "

Develop the decision tree, with associated probabilities. With each possible play, you end up with a new composition.

Compute the expected posterior composition.

(Absolute composition is RC; relative composition is TC.)

According to your statement, this expected posterior should be equal to the prior ("adding zeros").

Well, the figures show otherwise: there is an expectation of a slightly lower proportion of 10's and a slightly higher proportion of 3's.

Do the math! Otherwise, I'm afraid you will stubbornly hold to your mantra E(X)+E(Y), as if that exempted you from calculation.

*Note: your mantra is correct. It's just not applicable in this instance.

Quote:kubikulann No, I'm teaching statistics. Maybe that's why you can't grasp it. It's more complicated than math.

Repeating a statement with much force has never been a proof of validity, in my book.

Your choice of the (famous) hats example shows you are treating another problem.

Look, I have set up a simplified problem, and the result clearly shows the expected relative composition of the deck changes after a player's hand. Start with an arbitrary deck composition (for instance 10 of each, or more generally p, q & r). That is the prior composition. Three types of cards: values are 10, 6 and 3.

A player receives one card, then chooses to hit or stand.

You bust if you go past 10.

Say that the (basic) strategy is:

" Hit on 3-6, stand on 7-10 "

Develop the decision tree, with associated probabilities. With each possible play, you end up with a new composition.

Compute the expected posterior distribution.

According to your statement, this expected posterior should be equal to the prior ("adding zeros").

Well, the figures show otherwise: there is an expectation of a slightly lower proportion of 10's and a slightly higher proportion of 3's.

Do the math! Otherwise, I'm afraid you will stubbornly hold to your mantra E(X)+E(Y), as if that exempted you from calculation.

You keep talking about how the number of cards is variable. I keep pointing out that this is true, but also irrelevant, because each of them have an expected value of 0. It's like going to a roulette wheel with no zeros and saying that you are going to keep betting on red until you win twice, and then stop, and asking if that changes your expectation. Of course it doesn't; you can add 0's as much or as little as you want, conditionally, or not, and you still end up with 0. If you have a 47.248% chance of adding another 0, that's still 0.

If you think that statistics is more complicated than math then you obviously haven't done any real math. That is the difference here; you are insisting on using brute force calculation because you are a statistician (or, at least, a statistics teacher) and you don't understand the simple theory (ie, the math). I am a mathematician, so I understand the theory and therefore refuse to waste my time with long calculations, because I already know what the result will be when I add up a bunch of 0's. Feel free to add them up, though, and tell me what you come up with. Although, be a little more careful when you add them up, since you made a mistake with your example. That tends to happen when you insist on a brute force approach instead of taking the time to understand the math.

I don't think that. I was saying it was more complicated for you, because I'm fed up with your insulting innuendos. Quote:AxiomOfChoice If you think that statistics is more complicated than math then you obviously haven't done any real math.

I'm fed up with people chanting their mantra, just saying "it's obvious; if you don't see it, you're an idiot."

Nothing is obvious. You'll never convince me unless you offer a mathematical proof. It's the one who is convinced without a proof who is an idiot.

It is also "obvious" that the Earth does not move, isn't it? Well, it's wrong !

It is YOU who are similar to all those system proponents who claim it works but never provide the details.

Quote:kubikulannI don't think that. I was saying it was more complicated for you, because I'm fed up with your insulting innuendos.

I'm fed up with people chanting their mantra, just saying "it's obvious; if you don't see it you're an idiot."

Nothing is obvious. You'll never convince me unless you offer a mathematical proof. It's the one who is convinced without a proof who is an idiot.

It is also "obvious" that the Earth does not move, isn't it? Well, it's wrong!

It is YOU who are similar to all those system proponents who claim it works but never provide the details.

My time is valuable to me, so how about this:

I will bet you 5000 US dollars (or more, if you'll agree to it) that I can provide a rigorous mathematical proof that playing a hand does not change the expected value of the true count. I will take the time to write up the proof and post it here. We will agree beforehand on a trusted party who will settle any dispute as to whether the proof that I provide is rigorous and correct. I'd nominate teliot, if he'd agree (since he is a real mathematician, with experience in academia, and is therefore used to reading and writing proofs. I have been critical of him in the past so you can trust that he will give me no "special treatment"). If he wishes to charge a fee for his validation (which I would, if I were him -- his time is valuable too) the loser of the bet must pay for it, in addition to paying the winner of the bet (so the fee does not come out of the amount of the bet; it is extra). Of course the fee can be avoided if the loser simply accepts that the proof is correct/incorrect without involving the judge, and pays off the bet.

I do not suggest that you take this bet. You will lose, for sure.

If you wish to get my work for free, I'm not going to take the time to write up a formal proof of what amounts to the statement that 0x + 0f(x) = 0 for all functions f and all values of x, so long as f is defined at x. However, I'd be happy to refute the example that you gave; perhaps this will convince you. For your simplified game, simply tell me which initial distribution you feel leads to a different expected distribution after the hand is played, and I will show the calculation that proves you wrong. I am letting you choose the initial distribution simply so you don't think that I'm cherry-picking one.

Don't bother; I can produce the proof myself at no charge. Being a teacher, I have a large supply of free time, have I not?

Moreover, I said I do not know what the answer is, so I would certainly not bet on one result against the other.

In the sex-biased society problem, I have produced two cases. One agrees with your feeling (no change in expected value); the other leads to a different answer. My next step is to determine which is applicable to the blackjack situation.

I'll check my back-of-the-envelope equations and post them here.

Three card values: 10, 6 and 3. The deck composition is p 10's, q 6's and r 3's. Define n as the deck size (p+q+r).

The table shows all possible cases according to the defined strategy (stand on a 10; after a 6, draw exactly one more card; after a 3, draw again, and after two 3's draw a third card).

1st card | Prob. | 2nd card | Prob. | 3rd card | Prob. | New p | New q | New r
---|---|---|---|---|---|---|---|---
10 | p/n | - | 1 | - | 1 | p-1 | q | r
6 | q/n | 10 | p/(n-1) | - | 1 | p-1 | q-1 | r
6 | q/n | 6 | (q-1)/(n-1) | - | 1 | p | q-2 | r
6 | q/n | 3 | r/(n-1) | - | 1 | p | q-1 | r-1
3 | r/n | 10 | p/(n-1) | - | 1 | p-1 | q | r-1
3 | r/n | 6 | q/(n-1) | - | 1 | p | q-1 | r-1
3 | r/n | 3 | (r-1)/(n-1) | 10 | p/(n-2) | p-1 | q | r-2
3 | r/n | 3 | (r-1)/(n-1) | 6 | q/(n-2) | p | q-1 | r-2
3 | r/n | 3 | (r-1)/(n-1) | 3 | (r-2)/(n-2) | p | q | r-3
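The table's stopping rule can also be checked empirically. Here is a minimal Monte Carlo sketch (mine, not from the original post; p = q = r = 8 is an arbitrary choice) that deals hands by that rule and averages how many of each value are removed:

```python
import random

def deal_hand(deck):
    """Draw from a shuffled deck following the table's rule:
    a 10 ends the hand; a 6 is followed by exactly one more card;
    a 3 is followed by a second card, and a second 3 by a third."""
    random.shuffle(deck)
    drawn = [deck[0]]
    if drawn[0] == 6:
        drawn.append(deck[1])
    elif drawn[0] == 3:
        drawn.append(deck[1])
        if drawn[1] == 3:
            drawn.append(deck[2])
    return drawn

random.seed(1)
p = q = r = 8
trials = 200_000
removed = {10: 0, 6: 0, 3: 0}
for _ in range(trials):
    deck = [10] * p + [6] * q + [3] * r
    for card in deal_hand(deck):
        removed[card] += 1

# Average number of each value removed per hand
for v in (10, 6, 3):
    print(v, removed[v] / trials)
```

For this starting composition the simulation shows more 10's than 3's removed per hand on average, in line with the table.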

(For the expectations, I have factored out the common term p (resp. q or r) and collected the "-1", "-2", etc.)

E(New p) = p - [p(n-1) + pq + pr + pr(r-1)/(n-2)] / [n(n-1)]

E(New q) = q - [pq + 2q(q-1) + qr + rq + qr(r-1)/(n-2)] / [n(n-1)]

E(New r) = r - [qr + rp + rq + r(r-1)(2p + 2q + 3(r-2))/(n-2)] / [n(n-1)]

Now, if the ratio of change E(New x)/x differs across the three values, then evidently the expected proportions are no longer in the ratio p : q : r. You can easily verify this by giving specific values to p, q and r.
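One way to carry out that verification is with exact rational arithmetic. This sketch (mine, not the original poster's; p = q = r = 8 is an arbitrary choice) evaluates the three formulas above and compares the ratios E(New x)/x:

```python
from fractions import Fraction as F

p, q, r = 8, 8, 8
n = p + q + r

# Expected number of each value removed per hand, per the formulas above
d10 = F(p * (n - 1) + p * q + p * r, n * (n - 1)) \
    + F(p * r * (r - 1), n * (n - 1) * (n - 2))
d6 = F(p * q + 2 * q * (q - 1) + q * r + r * q, n * (n - 1)) \
    + F(q * r * (r - 1), n * (n - 1) * (n - 2))
d3 = F(q * r + r * p + r * q, n * (n - 1)) \
    + F(r * (r - 1) * (2 * p + 2 * q + 3 * (r - 2)), n * (n - 1) * (n - 2))

# Ratios E(New x)/x for x = p, q, r
ratios = [(p - d10) / p, (q - d6) / q, (r - d3) / r]
print([float(x) for x in ratios])
# The three ratios differ: 10's are depleted fastest, 3's slowest
```

With exact fractions there is no rounding to hide behind: for an even starting composition the three ratios come out strictly ordered, with the 10's ratio smallest.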

The expected relative composition of the deck after one hand is played has relatively more 3's and relatively fewer 10's.

This is counter-intuitive, like many probabilistic results.

Never trust what you think obvious when dealing with probabilities.

Yet... AxiomOfChoice is correct in one respect. I'll leave it to him to point out what. ;-)