My one-woman crusade: Why 1,000 is enough for a public opinion poll

December 3, 2019

I tell you what, the world has changed a lot since I started working in research.  My industry has been pootling along for decades, asking people stuff and helping clients make evidence-based decisions, but suddenly we’ve had a couple of close-call referendums and some questionable national leadership, and now everyone is interested in public opinion.

Coolio.  Gauging public opinion is what we do.  So we do that.

And when we do… don’t read the comments.

I read the comments.

I should stop reading the comments.

I stuck ‘opinion poll’ into Facebook and here are some comments from the first article that came up – a random Scotsman article about a random opinion poll:

“1000 people poll does not represent the whole of Scotland.”

“1000 people does not speak for whole of Scotland.”

“1000 people is not indicative.”

“So, we have a poll of a statistically insignificant number of Scots.”

“Only a thousand people asked so not a very reliable survey.”

“Amazing how many of these poles are carried out without most ordinary people ever being asked.”

“Who did these people poll?  Nobody who I know.”

“I would like to ask just how many took part in the poll as I live in Scotland and was not asked or knew anything about it.”

“1000 polled in Dundee.”

“1000 mutant cult members voted.”

“Probably did the poll with jakey brew boys.”

Seriously, this always sets me off on my one-woman crusade to educate the world about why a survey of 1,000 people is enough for a public opinion poll.

Anyhoo, I have written a bit before about sampling and quotas, in which I say:

Research is all about asking the right people the right questions, and the process of selecting the right people to participate in a research project is called sampling.  The idea behind sampling is that you don’t need to talk to a whole population to find out what that whole population thinks.  Which is handy, because speaking to ‘all women’ or ‘all Coca-Cola customers’ (etc etc) would be very expensive / time-consuming / impractical.

Someone wise once explained sampling to me using a soup-based analogy.  If you get a bowl of soup, add salt, and stir it up, you do not need to drink the whole bowl of soup to know whether it tastes right.  You can tell whether it is correctly seasoned by tasting a spoonful because all of the spoonfuls taste the same.

So, I’m saying that if you talk to a subset of a population you will more than likely get the same answers as if you asked everyone, as long as you are sure that the characteristics of the subset match the characteristics of the whole population.

This is the principle behind the 1,000 thing – if you interview the right mix of 1,000 people it is just about as good as interviewing everyone.

I have managed a population survey and I have seen this work in practice.  I’ve done surveys where we asked the same question to a different thousand people every month and got very similar results each time, and I’ve asked questions to estimate how many people would do a thing (say, get the flu jab) and the estimate later proved accurate.

What’s “correct” though?  Well, this is where margin of error comes in.  This is a statistical measure of how close the results are likely to be to the ‘true’ figure, and therefore how confident you can be in them.  For the UK adult population, a poll of 1,000 people has a margin of error of around ±3%.  As YouGov helpfully explains, “This means that 19 times out of 20, the figures in the opinion poll will be within 3% of the ‘true’ answer you’d get if you interviewed the entire population.”

So if 72% of the sample say “yes” to a question, the “true” answer is somewhere between 69% and 75%.
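For the curious, that ±3% isn’t magic – it falls out of the standard margin-of-error formula for a proportion, z × √(p(1−p)/n), at the usual 95% confidence level.  A quick sketch (my own illustration, not anyone’s official calculator; p = 0.5 is the worst case, giving the widest interval):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a proportion at ~95% confidence (z = 1.96).

    p=0.5 is the conservative worst case - it maximises p*(1-p),
    so the interval can only be narrower for other answers.
    """
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1000)
print(f"n = 1,000 -> +/-{moe:.1%}")  # roughly +/-3.1%
```

So a sample of 1,000 lands you at about ±3.1%, which polling write-ups round to the familiar “3%”.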

In most circumstances that is close enough: it gives a decent steer to base a decision on, and there are enough people in the pot to chop it up by age or gender or region and still make statistical sense of it.  If you wanted to know whether re-branding your biscuits was a good idea, or whether people were planning to take a holiday over the summer, that’d be fine.

What if you feel you need a more precise answer though?  Well, that’s where you need to be practical, because achieving it is going to cost you a shed-load more time and money.

Let’s say you want to survey the UK adult population with a margin of error of 1%.  You would need a representative sample of close to 10,000 adults.

That’s A LOT more people, for a marginal improvement.
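Rearranging the margin-of-error formula for n shows just how steep that trade-off is – halving the margin of error quadruples the sample you need.  A sketch, again assuming the usual 95% confidence level and the worst-case p = 0.5:

```python
def sample_size(moe, p=0.5, z=1.96):
    """Sample size needed to hit a given margin of error at ~95% confidence."""
    return z * z * p * (1 - p) / moe ** 2

# Margin of error shrinks with the *square root* of n,
# so extra precision gets expensive fast:
print(round(sample_size(0.03)))  # ~1,067 respondents for +/-3%
print(round(sample_size(0.01)))  # ~9,604 respondents for +/-1%
```

Going from ±3% to ±1% means interviewing roughly nine times as many people, all to narrow the interval by two percentage points.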

And how much do you think it costs to run a survey of 10,000 adults?  I’d say we’re into the £millions.

Not worth it.

But a thousand is do-able, and for efficiency the polling world is set up with that in mind.  You can buy space on an independent, representative opinion survey of 1,000 people (an omnibus survey) for a few hundred quid per question.  Job done, reasonable price, easy.  That’s why you see so many 1,000-person polls reported.

That’s why a nice, round, affordable 1,000 is optimal: it’s the sweet spot between robust and practical as a way to gauge public opinion.





