June 4th, 2012 at 2:30:33 PM
Heavy,
I was reading something today that reminded me of you, and I wanted to discuss it.
"Many [people] have an aversion to statistics. Some may even prefer to believe that statistics are a game played by academics that is only tangentially related to their own practice. Therefore, they do not pursue expertise in statistics. But avoiding statistics constitutes a disservice to their own intellect, and renders [participants] vulnerable to illusions and misinformation.
Some [participants] might have the impression that statistics are used (or misused) to prove a conjecture. This is not correct. Truth cannot be proved. However, falsehoods can be proved; and the role of statistics is to refute false conjectures. Statistics serve [participants] not by proving truth but by protecting them from errors and false assertions."
I think that with some astute use of statistics, we (and when I say we, I really mean you) can protect against errors and false assertions. Many on this board have asserted that your claims are false. Let's use stats to refute those possibly false conjectures.
The article goes on to explain type I errors (and type II, but we will skip that) and their importance. I will quote it here.
"There is a risk that a difference, between an index intervention and a control intervention, might be detected when such a difference should not occur. When it does occur, it is known as type I error. It arises when, by chance alone, the subjects who undergo the index treatment happen to be a group destined to respond extraordinarily well; or when the subjects recruited for the controlled group are ones destined to respond extraordinarily poorly. The differences in the samples, not the strength of the intervention, account for the differences in outcome. The risk of this occurring can be calculated, and the prevailing convention is to keep this risk to less than 0.05; hence the expression: P < 0.05. That value means that there is always a 1 in 20 chance of a rogue result due to aberrations in samples. That is why repeat studies are mandatory, not so much to confirm the result, but to refute rogue results that arise from an unrepresentative sample. In other words, an intervention is not proved by repeat studies, but the credibility of that intervention rises by default if and when repeated attempts to disprove it fail."
That is a little technical, but basically: if we design a study and you carry it out, then (depending on the design) it would be unlikely (less than a 5% chance) that a positive result was due to chance alone. If you then repeat the study and again reject the null hypothesis (that is, you again show a difference), the credibility of controlling dice rises significantly.
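To make the "1 in 20" figure concrete, here is a minimal simulation sketch in Python. Both groups roll perfectly fair dice, so any "difference" the test flags is a rogue result - a type I error. The 12,802 rolls per group matches the calculation further down; the rest is standard two-proportion testing, nothing specific to your throw.

```python
# Minimal sketch: how often does a two-proportion z-test flag a "difference"
# when BOTH groups are rolling fair dice? That rate is the type I error.
import numpy as np

rng = np.random.default_rng(2012)
N_EXPERIMENTS = 10_000   # repeated studies, all under the null hypothesis
N_ROLLS = 12_802         # rolls per group (matches the calculation below)
P_SEVEN = 6 / 36         # a 7 hits 16.6667% of the time with fair dice

false_alarms = 0
for _ in range(N_EXPERIMENTS):
    hits_a = rng.binomial(N_ROLLS, P_SEVEN)   # "influenced" group (fair here too)
    hits_b = rng.binomial(N_ROLLS, P_SEVEN)   # control group
    p_a, p_b = hits_a / N_ROLLS, hits_b / N_ROLLS
    pooled = (hits_a + hits_b) / (2 * N_ROLLS)
    se = np.sqrt(pooled * (1 - pooled) * 2 / N_ROLLS)
    if abs(p_a - p_b) / se > 1.96:            # two-sided test at alpha = 0.05
        false_alarms += 1

print(f"Rogue results: {false_alarms / N_EXPERIMENTS:.3f} (expect about 0.05)")
```

Run it and about 5% of these perfectly fair experiments will "show" dice influence, which is exactly why the article says repeat studies are mandatory.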
So please, get a camera and do the study. It won't prove that you can repeat it in a casino, but it MAY show that with your dice, in your house, on your table, you can control the outcome of the dice.
So, using an alpha of 0.05 (5%), a beta of 0.20 (80% power), and assuming a 7 normally hits 16.6667% of the time (6/36) and that you can influence it to hit 18% (I have no idea if this is the case, since you won't provide that info to me), you will need a dice-control group of 12,802 rolls (influenced dice) and a non-control group of 12,802 rolls (normally thrown dice) to show a difference.
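For what it's worth, here is a sketch of that sample-size calculation using statsmodels' power routines (a two-proportion z-test via Cohen's h effect size). The 18% influenced rate is the same unverified assumption as above, and this approach lands within a couple percent of 12,802 - the exact figure shifts a bit with the formula or continuity correction used.

```python
# Sketch: rolls per group needed to detect a shift in the 7s rate from
# 16.6667% (fair) to an assumed 18% (influenced), at alpha 0.05, power 80%.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

P_FAIR = 6 / 36          # probability of a 7 with two fair dice
P_INFLUENCED = 0.18      # assumed controlled-throw rate (unverified)

h = proportion_effectsize(P_INFLUENCED, P_FAIR)   # Cohen's h effect size
n_per_group = NormalIndPower().solve_power(
    effect_size=h,
    alpha=0.05,          # 5% chance of a type I error (the "rogue result")
    power=0.80,          # beta = 0.20: 80% chance of catching a real effect
    ratio=1.0,           # equal-sized groups
    alternative="two-sided",
)
print(f"Rolls needed per group: {n_per_group:,.0f}")   # roughly 12,600
```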
That calculation treats the data as categorical (a 7 or not a 7). The required numbers can be significantly smaller using a numeric scale (but you need more info, like the standard deviation), and I am not a stats man, so I don't know which is more appropriate - categorical or numerical.
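Purely for illustration, here is what the numeric-scale version might look like - but every input below (the per-session measure, the means, and especially the standard deviation) is hypothetical; whether the count really comes out smaller depends entirely on the SD you actually observe.

```python
# HYPOTHETICAL sketch of the numeric-scale version: compare the mean number
# of 7s per 100-roll session between groups with a two-sample t-test.
from statsmodels.stats.power import TTestIndPower

MEAN_FAIR = 16.667       # expected 7s per 100 rolls with fair dice
MEAN_INFLUENCED = 18.0   # same assumed shift as above
SD = 4.0                 # made-up session-to-session standard deviation

d = (MEAN_INFLUENCED - MEAN_FAIR) / SD    # Cohen's d effect size
n_sessions = TTestIndPower().solve_power(
    effect_size=d, alpha=0.05, power=0.80, alternative="two-sided",
)
print(f"100-roll sessions needed per group: {n_sessions:.0f}")
```

You would need the real standard deviation from actual sessions before trusting anything this prints.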
Anyway, I think it would be fun for you to do - and revealing.