Jesus Christ they got 20,000 people to answer poll questions in two weeks?
. . . What was the methodology? Was it, like, at a football match everybody who agrees . . raise your hand? Was it “like if you agree” on Telegram? Agggh. Okay I’ll go look.
Okay here’s the Methodology section:
This report is based on a public opinion poll of adult populations (aged 18 and over) conducted in May 2024 in 15 countries (Bulgaria, the Czech Republic, Estonia, France, Germany, Great Britain, Greece, Italy, Poland, Portugal, the Netherlands, Spain, Sweden, Switzerland, and Ukraine). The total number of respondents was 19,566.
The poll was conducted online by Datapraxis and YouGov in the Czech Republic (9-16 May, 1,071 respondents), France (9-20 May, 1,502), Germany (9-17 May, 2,026), Great Britain (9-13 May, 2,082), Greece (1-16 May, 1,093), Italy (9-17 May, 1,036), Poland (9-23 May, 1,550), Portugal (9-20 May, 1,070), the Netherlands (9-15 May, 1,014), Spain (9-17 May, 1,508), Sweden (9-23 May, 1,026), and Switzerland (2-15 May, 1,079). It was conducted online by Datapraxis and Alpha Research in Bulgaria (9-23 May, 1,000), and online by Datapraxis and Norstat in Estonia (6-21 May, 1,009). In all these countries the sample was nationally representative of basic demographics and past votes.
In Ukraine, the poll was conducted by Datapraxis and Rating Group (7-12 May, 1,500) via telephone interviews (CATI) with respondents selected using randomly generated telephone numbers. The data was then weighted to basic demographics. Fully accounting for the population changes due to the war is difficult, but adjustments have been made to account for the territory under Russian occupation. This, combined with the probability-based sampling approach, strengthens the level of representativeness of the survey and generally reflects the attitudes of Ukrainian public opinion in wartime conditions.
Some of the questions were not asked in Great Britain and Switzerland. The questionnaire in Ukraine included several questions that were not asked elsewhere. Overall, the graphs in this paper display data for all the countries in which the respective question was asked.
Which - says who did it, but except for “online” or in the case of Ukraine “random telephone numbers”, it doesn’t describe the Method of the . . y’know, the Ology. So. “online”. Like, click on an Ad at stormfront.com, or -?
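For what it’s worth, the “weighted to basic demographics” line in the Ukraine paragraph has a fairly standard meaning: post-stratification. A minimal sketch of the idea, with entirely made-up numbers purely for illustration:

```python
# Hypothetical illustration of "weighted to basic demographics":
# post-stratification gives each respondent a weight of
# (population share of their demographic cell) / (sample share of that cell).
# All numbers below are invented for the example.

population_share = {"18-34": 0.26, "35-54": 0.33, "55+": 0.41}  # e.g. census figures
sample_share     = {"18-34": 0.35, "35-54": 0.35, "55+": 0.30}  # what the panel delivered

weights = {cell: population_share[cell] / sample_share[cell] for cell in population_share}
for cell, w in weights.items():
    print(f"{cell}: weight {w:.3f}")
# 18-34: weight 0.743  (over-represented, weighted down)
# 35-54: weight 0.943
# 55+:   weight 1.367  (under-represented, weighted up)
```

That tells you what they did with the sample once they had it; it still says nothing about how the “online” respondents were recruited in the first place, which is the complaint above.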
Anyway here’s the numbers:
Thanks for posting the table, much more informative than the article.
IDK if I have a browser problem? But that table was not visible to me reading the article.
That was from the source pdf linked in the article, fwiw. It also didn’t have any methodology notes, it was just results.
There’s a lot of info and graphs, but it’s interesting:
Yeah, it’s just that polls are notorious for saying something grand, and then when you dig into it it’s always some bizarre, unbelievable process with a minuscule percentage of the population.
As we see, the article doesn’t say how they did it.
If we broadly take the population of Europe and Ukraine to be 788 million people total, this survey of 20,000 people would be about 0.0025% of the population.
I don’t think that’s statistically significant. By, well, a lot.
But it is an interesting headline.
It is; population size has essentially no effect on the statistical significance of a sample (other than amplifying it as you start to sample most of the population). 20,000 people is massive and will give you sub-1% confidence ranges. The difficulty is ensuring you have a representative sample (no one does) and correcting for the biases you do have in your sample.
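To make that concrete, here’s a quick check: a minimal sketch assuming simple random sampling, 95% confidence, and the worst case p = 0.5, using the rough 788 million figure from above. A real online panel is messier, which is exactly the representativeness point.

```python
from math import sqrt

# Margin of error at 95% confidence for a simple random sample,
# worst case p = 0.5, with and without the finite population
# correction (FPC).

z, p = 1.96, 0.5
n = 20_000         # respondents
N = 788_000_000    # rough population of Europe + Ukraine, per the thread

moe = z * sqrt(p * (1 - p) / n)
fpc = sqrt((N - n) / (N - 1))  # shrinks meaningfully only as n approaches N

print(f"sample fraction:  {n / N:.4%}")        # 0.0025%
print(f"MOE, ignoring N:  {moe:.4%}")          # 0.6930%
print(f"MOE, with FPC:    {moe * fpc:.4%}")    # 0.6930% -- N is irrelevant here
```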
Do you really think the huge polling industry is unaware of basic statistics, and that your dividing the sample size by the population would come as a revelation to them?
I think the huge polling industry is based on, and a provider of, multiple lies.
I think the average person, who will incorporate polling headlines into their worldview, is unaware of the enormous differences.
Do those lies include tricking professional mathematicians into thinking their lies are actually formally proved mathematics?
No. The math is sound. The premise is flawed.
So why did you say that 20,000 was far too small a sample for Europe, if you accept the maths showing that a sample of that size gives a <1% margin of error?
From your link:
In practice, the sample size used in a study is usually determined based on the cost, time, or convenience of collecting the data, and the need for it to offer sufficient statistical power.
You see how the elements listed there (cost, time, convenience, and ‘sufficient statistical power’) are qualitative judgements, not known constants? (I mean, whenever something starts with “In practice . . .”, you know what came before it described a perfect system devoid of unknowns - in other words, “ideally it works like this, but you’ll see it doesn’t work exactly like that”.)
What is the sufficient statistical power for sampling Europe? 0.0025%? Two and a half thousandths of a single percent? That greenlights your findings? Okay. I disagree. Polling companies don’t disagree because in this case, as you noted, 20k is an amazing sample size. The cost and time for that - not to mention the convenience! - alone is amazing . . for an opinion poll. No doubt they’re proud, that’s a fine achievement for an opinion poll. Now: did they measure what they set out to measure? I doubt it, but since the methodology given is the single word “online”, I remain skeptical.
And saying “but there’s math in it!” is exactly why I’m skeptical. That effectively means nothing, and it’s used to validate whatever conclusions were presented. “We ran the numbers, and . . ” can mean very specific things, and in some contexts it is good enough to move on to the conclusions. Polls trade on that, but they don’t deserve to.
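For what it’s worth, the standard way pollsters answer “how big a sample is sufficient?” is to work backwards from a target margin of error, and the population size never enters into it. A minimal sketch, under the same simple-random-sampling, 95%-confidence, worst-case p = 0.5 assumptions as above:

```python
from math import ceil

# Standard sample-size determination: the n needed for a target margin
# of error at 95% confidence, worst case p = 0.5. Population size does
# not appear anywhere in the formula.

def required_n(moe, z=1.96, p=0.5):
    return ceil(z**2 * p * (1 - p) / moe**2)

for moe in (0.03, 0.02, 0.01):
    print(f"±{moe:.0%} margin of error -> n = {required_n(moe):,}")
# ±3% margin of error -> n = 1,068
# ±2% margin of error -> n = 2,401
# ±1% margin of error -> n = 9,604
```

The same n serves a city of 100,000 or a continent of 788 million; whether those respondents actually look like the population is the part the formula can’t buy, which is the question both sides of this thread keep circling.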