There’s a lot of info and graphs, but it’s interesting:
Yeah, it’s just that polls are notorious for saying something grand, and then when you dig into it it’s always some bizarre, unbelievable process with a minuscule percentage of the population.
As we see, the article doesn’t say how they did it.
If we broadly take the population of Europe and Ukraine to be 788 million people total, this survey of 20,000 people would be 0.002% of the population.
I don’t think that’s statistically significant. By, well, a lot.
But it is an interesting headline.
It is. Population size has essentially no effect on the statistical significance of a sample (other than tightening it further once you start to sample most of the population). 20,000 people is a massive sample and will give you sub-1% confidence intervals. The difficulty is ensuring you have a representative sample (no one does) and correcting for the biases your sample does have.
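The point about population size can be checked directly with the standard margin-of-error formula. A minimal sketch in Python, assuming simple random sampling, a worst-case proportion of 0.5, and 95% confidence (the 788-million figure is the rough Europe-plus-Ukraine population quoted above):

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(n, population=None, p=0.5, confidence=0.95):
    """Margin of error for a sample proportion under simple random sampling.
    If `population` is given, apply the finite population correction (FPC)."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # 1.96 for 95% confidence
    moe = z * sqrt(p * (1 - p) / n)
    if population is not None:
        moe *= sqrt((population - n) / (population - 1))  # FPC
    return moe

n = 20_000
# With or without the 788-million population, the answer is ~0.69%:
print(margin_of_error(n))                           # ≈ 0.00693
print(margin_of_error(n, population=788_000_000))   # ≈ 0.00693
# Even shrinking the population to 100,000 barely moves it:
print(margin_of_error(n, population=100_000))       # ≈ 0.00620
```

This is why dividing the sample size by the population is not the relevant check: once the population is large relative to the sample, the error depends almost entirely on n.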
Do you really think the huge polling industry is unaware of basic statistics, and that your dividing the sample size by the population would come as a revelation to them?
I think the huge polling industry is based on, and a provider of, multiple lies.
I think the average person, who will incorporate polling headlines into their worldview, is unaware of the enormous differences.
Do those lies include tricking professional mathematicians into thinking their lies are actually formally proved mathematics?
No. The math is sound. The premise is flawed.
So why did you say that 20,000 was far too small a sample for Europe, if you accept the maths showing that a sample of that size gives a <1% margin of error?
From your link:

“In practice, the sample size used in a study is usually determined based on the cost, time, or convenience of collecting the data, and the need for it to offer sufficient statistical power.”
You see how the elements listed there (cost, time, convenience, and ‘sufficient statistical power’) are qualitative judgements and not known constants? (I mean, whenever a sentence starts with “In practice . . .” you know the preceding theory assumed a perfect system devoid of unknowns - in other words, “ideally, but you’ll see it doesn’t work exactly like that”.)
What is ‘sufficient statistical power’ for sampling Europe? 0.002%? Two thousandths of a single percent? That greenlights your findings? Okay. I disagree. Polling companies don’t disagree because in this case, as you noted, 20k is an amazing sample size. The cost and time for that - not to mention the convenience! - are amazing on their own . . for an opinion poll. No doubt they’re proud; that’s a fine achievement for an opinion poll. Now: did they measure what they set out to measure? I doubt it, and since the methodology given is the single word “online”, I remain skeptical.
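For what it’s worth, ‘sufficient statistical power’ does have a quantitative meaning, even though the targets (conventionally α = 0.05 and 80% power) are chosen by judgement rather than derived. A sketch, assuming a two-sided one-sample proportion test with the normal approximation; the 50% baseline and 5-point shift are illustrative values, not figures from the article:

```python
from math import sqrt, ceil
from statistics import NormalDist

def required_n(p0, p1, alpha=0.05, power=0.80):
    """Sample size needed to detect a shift from p0 to p1 in a one-sample,
    two-sided proportion test (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # quantile for target power
    num = (z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))) ** 2
    return ceil(num / (p1 - p0) ** 2)

# Detecting a 5-point shift from 50% needs on the order of ~800 respondents;
# note that the population size does not appear in the formula at all.
print(required_n(0.50, 0.55))
```

So the ‘sufficient’ sample size falls out of the effect size you want to detect and the error rates you will tolerate, not out of any fraction of the population.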
And saying “but there’s math in it!” is exactly why I’m skeptical. That effectively means nothing, and it’s used to validate whatever conclusions were presented. “We ran the numbers, and . . ” can mean very specific things, and in some contexts it is good enough to move on to the conclusions. Polls trade on that, but they don’t deserve to.