Quote:OnceDear Try this. Extend this table...

tosses | max possible difference | possible outcomes | average difference between heads and tails
0      | 0                       | 1*                | (0*1)/1                       = 0
1      | 1                       | 2                 | (1*2)/2                       = 1
2      | 2                       | 4                 | (2*2 + 0*2)/4                 = 1
3      | 3                       | 8                 | (3*2 + 1*6)/8                 = 1.5
4      | 4                       | 16                | (4*2 + 2*8 + 0*6)/16          = 1.5
5      | 5                       | 32                | (5*2 + 3*10 + 1*20)/32        = 1.875
6      | 6                       | 64                | (6*2 + 4*12 + 2*30 + 0*20)/64 = 1.875
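Extending the table by hand gets tedious; a short script (my sketch, not from the original post) can enumerate the outcomes exactly, weighting each possible difference by the number of toss sequences that produce it:

```python
from math import comb

def avg_abs_diff(n):
    """Average |heads - tails| over all 2**n equally likely outcomes of n tosses."""
    return sum(comb(n, k) * abs(2 * k - n) for k in range(n + 1)) / 2 ** n

# Reproduce the table and keep going:
for n in range(11):
    print(n, avg_abs_diff(n))
```

Running it reproduces the averages above (0, 1, 1, 1.5, 1.5, 1.875, 1.875, ...) and shows the stair-step pattern continuing upward.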

So...

The maximum possible difference in count between heads and tails increases with the number of tosses.

and

The average difference in count between heads and tails increases with the number of tosses.

Reconcile that with the difference in count approaching zero, which is your assertion.

* If you toss 0 times, you get 1 possible outcome, where it lands neither heads nor tails. Live with it!

Horrifically bad math. It has nothing to do with the question at hand. How many times does it converge to the mean? I think this is the infinite-losses-in-a-row argument that I already gave you the 0.000...% chance of. And mathematicians don't even think that 0.000...% exists. Funny, huh?

Quote:JyBrd0403 I think you leave that part of the LLN out of your example; it doesn't end until reaching the mean, in this case 50%. That's the only thing that ends it. Otherwise, you will continue to get closer and closer.

Yes. You keep getting closer and closer to exactly 50%, but you're not guaranteed to actually hit it. Most people will agree that 49.999999999999999999999999999999999999% is very close to 50%. If you need closer, you can keep flipping until you get 1,000 9's after the decimal place, and you can reach those figures while still having an imbalance between heads and tails.
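To put numbers on that: the toss count and imbalance below are made up for illustration, but they show how a win percentage can sit arbitrarily close to 50% while the raw counts stay a million apart:

```python
from fractions import Fraction

tosses = 10 ** 18                  # hypothetical number of tosses
heads = tosses // 2 - 500_000      # heads trail tails by 1,000,000
pct = 100 * Fraction(heads, tosses)  # exact win percentage

print(float(pct))          # a long run of 9's after "49.9..."
print(tosses - 2 * heads)  # yet tails lead heads by 1,000,000
```

Make `tosses` bigger and the percentage gains more 9's, while the absolute imbalance stays exactly where you put it.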

If you are still struggling to see this, James Grosjean wrote a blog post a few years ago, http://www.gamblingwithanedge.com/the-denominator-where-due-happens , that explains it very well.

"Just to finish the thought here. The LLN states that the average of the results (or win percentage) will become closer to the expected value the more trials performed. So, I did a little math experiment, starting at 10,000 trials with a win percentage of 49.1%, or -90 units from 0. I added a .001 percent increase to the win percentage every 10,000 trials. So, a modest move closer to 50% every 10,000 trials. Here's what I got.

10,000 trial intervals.

10k x .491 = 4,910 wins -90 from 0

20k x .492 = 9,840 wins -160 from 0

30k x .493 = 14,790 wins -210 from 0

40k x .494 = 19,760 wins -240 from 0

50k x .495 = 24,750 wins -250 from 0

60k x .496 = 29,760 wins -240 from 0

70k x .497 = 34,790 wins -210 from 0

80k x .498 = 39,840 wins -160 from 0

90k x .499 = 44,910 wins -90 from 0

As you can see, the longer you play the closer the win percentage gets to 50%, and the closer to 50% you get, the closer you get to 0."

I think that Excel thing these guys talk about can do this for you.

Quote:JyBrd0403 But you didn't actually do the math, did you? If you actually do the math, you get this.

[repeats the same 10,000-trial figures quoted above, from "Just to finish the thought here" through the -90/-160/-210/-240/-250 table and back]

Haven't we had this conversation before?

Anyway, it seems to me that we are now talking about two different, although related, things.

You can have the win percentage get closer to 50% while the difference between wins and losses differs. For example, if you win 4900 of the first 10,000 tosses, and then win 4950 out of every 10,000 after that, the percentage starts at 49 and increases toward 49.5, but the difference increases by 100 after every 10,000 tosses. Nothing "bad" about that math.
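That example can be tabulated directly. This is a sketch of the hypothetical 4,900-then-4,950 sequence described above, not a simulation of real tosses:

```python
def blocks(n):
    """Win 4,900 of the first 10,000 tosses, then 4,950 of each later
    block of 10,000; return (win %, losses minus wins) after each block."""
    wins = tosses = 0
    out = []
    for b in range(n):
        wins += 4900 if b == 0 else 4950
        tosses += 10_000
        out.append((100 * wins / tosses, tosses - 2 * wins))
    return out

for pct, behind in blocks(5):
    print(f"{pct:.3f}% wins, behind by {behind}")
```

The percentage climbs toward 49.5% while the deficit grows by exactly 100 per block, which is the whole point: both things happen at once.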

On the other hand, we seem to agree that "eventually" a 50/50 bet will get to the point where wins = losses. However, this makes some assumptions, like the person placing the bets is immortal. "Eventually" is a long, long, long, did I mention "long", long time. Speaking of computer simulations, let me run some to see how long of a run I would need to get back to the break-even point.

Edit: I just ran one, and got two runs where it took 11.92 billion and 15.19 billion tosses, respectively, to get back to 50%. The first one had a heads/tails difference as high as 170,218, and the second as high as 199,861. At one toss per second, the second one took over 480 years.
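A run like that is easy to reproduce. Here's a minimal sketch of such a simulation (the seed and the give-up cap are arbitrary choices of mine, not the program actually used):

```python
import random

def tosses_until_even(max_tosses=1_000_000, seed=1):
    """Flip a fair coin until heads == tails again, or give up at max_tosses.
    Returns (tosses_taken_or_None, largest_abs_difference_seen)."""
    rng = random.Random(seed)
    diff = peak = 0
    for t in range(1, max_tosses + 1):
        diff += 1 if rng.getrandbits(1) else -1
        if abs(diff) > peak:
            peak = abs(diff)
        if diff == 0:
            return t, peak
    return None, peak

print(tosses_until_even())
```

Try different seeds: most runs return to even quickly, but the distribution of return times is so heavy-tailed that runs in the billions, like the two above, are entirely possible.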

There's something to think about:

Let "point A" be the starting point, and "point B" the first point where there are 100,000 more heads than tails since A.

Eventually, you "should" get back to zero from A, but that means that there would be 100,000 more tails than heads since B. Let this be "point C".

Getting back to zero from B means there are now 100,000 more heads than tails since point C. You end up going back and forth between "100,000 more heads than tails" and "100,000 more tails than heads."

Just to repeat it, with emphasis this time:

Quote:Richard A. Epstein, "The Theory of Gambling and Statistical Logic", p. 28The law of large numbers has frequently been cited as the guarantor of an eventual head-tail balance. Actually, in colloquial form, the law proclaims that the difference between the number of heads and the number of tails thrown may be expected to increase indefinitely as the number of trials increases, although by decreasing proportions.

The proportion of heads/tails converges toward 50%, but the absolute difference increases indefinitely. That's because the denominator (the number of trials) grows faster than the numerator (the difference). It's not an inconsistency if you understand the distinction between the absolute difference in outcomes and that difference's proportion of the total number of trials.
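A quick simulation (my sketch; trial counts and seed are arbitrary) shows both halves of that statement at once: the average absolute difference keeps growing while its share of the trial count keeps shrinking:

```python
import random

def mean_abs_diff(n, trials=500, seed=0):
    """Monte Carlo estimate of the average |heads - tails| after n fair tosses."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        heads = sum(rng.getrandbits(1) for _ in range(n))
        total += abs(2 * heads - n)
    return total / trials

for n in (100, 1_000, 10_000):
    d = mean_abs_diff(n)
    print(n, round(d, 1), round(d / n, 4))  # difference up, proportion down
```

The first column of output grows roughly like the square root of n; the second shrinks toward zero, exactly the numerator-versus-denominator race described above.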

Quote:MathExtremist Heh, I wake up to see a four-page argument over whether the quote I published on page 1 is correct.

[repeats the Epstein quote and the numerator/denominator explanation from the post above]

Horrifically bad math. The absolute difference will not continue to grow indefinitely (as I showed in the post above). What will happen is that a wave will form: the "raw numbers" will grow, reach an apex, and come back down to the mean. That will always happen. If the win percentage continues to get closer and closer to the mean, then the "raw numbers" will form a wave. Richard A. Epstein is dead wrong, and anyone can use an Excel spreadsheet to show that.

I'm blocking you ME, so don't bother responding to me. I have no desire to read any more of the bad math that you have been spewing out for years now.

Quote:JyBrd0403 Horrifically bad math. ... Richard A Epstein is dead wrong

Um, no. Epstein is perfectly correct, and so is the statistics department at Cal Berkeley:

Quote:UC Berkeley Glossary of Statistical Terms The Law of Large Numbers says that in repeated, independent trials with the same probability p of success in each trial, the percentage of successes is increasingly likely to be close to the chance of success as the number of trials increases. More precisely, the chance that the percentage of successes differs from the probability p by more than a fixed positive amount, e > 0, converges to zero as the number of trials n goes to infinity, for every number e > 0. Note that in contrast to the difference between the percentage of successes and the probability of success, the difference between the number of successes and the expected number of successes, n*p, tends to grow as n grows.

http://www.stat.berkeley.edu/~stark/SticiGui/Text/gloss.htm