Close to one in two (49%) Australians want to ban Muslim immigration, if you can believe the polling. It’s from Essential, and many do not.

When the poll was released, Labor MP Tim Watts asked what responses you’d get if you asked other questions about migration, calling on politicians to “unpack this simplistic, political snake oil”.

On News Corp’s RendezView, an opinion piece cited results of a “much larger sample” on an American online polling website, which “paint a different picture”. Meanwhile, Mariam Veiszadeh, the president of the Islamophobia Register Australia, tried to run her own poll, which was quickly flooded with nearly 47,000 votes, most opposed to Muslim immigration.

[Now we know, racists may well represent the ‘silent majority’]

So what polls, if any, should you trust?

Crikey unpacks the methods behind professional polling.

How big a sample do you need? And is it representative?

It’s common for people to point to sample size as the key factor in assessing the legitimacy of a poll. But beyond a certain point, sample sizes don’t matter as much as demographic matching.

How many people do you need? Pollsters rely on the statistics of random sampling — the error in a poll estimate follows a bell curve — and poll an appropriate number of people to keep the margin of error small (those wanting an introduction to the mathematics of polling can read this paper). Most Australian political polls aim for at least 1000 respondents — a figure that has historically given pollsters samples big enough to confidently predict election outcomes. Because the margin of error shrinks with the square root of the sample, halving it would require a four-fold increase in respondents, which, for most polls, is uneconomical. Smaller polls, conversely, suffer from larger margins of error.
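To make the arithmetic concrete, here is a minimal sketch in Python of the textbook margin-of-error calculation for a simple random sample; it assumes the standard worst case (a 50/50 split, 95% confidence) and ignores the design effects real polls contend with.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a simple random sample.

    n: sample size
    p: assumed proportion (0.5 is the worst case, the standard choice)
    z: z-score for the confidence level (1.96 for 95% confidence)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 1000-person poll carries roughly a 3-point margin of error ...
print(f"n=1000: +/-{margin_of_error(1000):.1%}")  # +/-3.1%
# ... and halving that margin means quadrupling the sample.
print(f"n=4000: +/-{margin_of_error(4000):.1%}")  # +/-1.5%
```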

Which Australians are polled is as important as making sure the samples are big enough.

Professional polling companies make sure their sample matches, as closely as possible, the population characteristics of Australia. Pollsters try to maintain a representative mix of gender, age and location. This is often achieved with quotas: if a poll has too many women responding, it will stop polling women until an even gender split is achieved.
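As a rough illustration of that quota logic (the mechanics below are an assumption — each pollster implements this differently, usually with interlocking quotas across several demographics at once):

```python
# A minimal sketch of quota screening, assuming a quota on gender only.
quotas = {"female": 500, "male": 500}  # target: even split in a 1000-person poll
counts = {"female": 0, "male": 0}

def accept_respondent(gender):
    """Accept a would-be respondent only if their quota group isn't yet full."""
    if counts[gender] >= quotas[gender]:
        return False  # quota filled: stop polling this group
    counts[gender] += 1
    return True
```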

Even once the raw data is collected, the massaging of the sample isn’t done. While a representative sample is seen as the most important ingredient of an accurate poll, the data is then further weighted to better reflect the actual electorate. Answers from hard-to-poll demographics may be given a heavier weighting than others, to correct for groups that end up under-represented in the sample. Exactly how the weighting is done is largely a trade secret, carefully guarded by each pollster.
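The simplest published form of this is post-stratification: each respondent is weighted by how under- or over-represented their demographic group is in the sample. The sketch below uses invented shares purely for illustration; as noted, the real schemes are trade secrets.

```python
# Post-stratification with invented figures: a group's weight is its share
# of the population divided by its share of the sample.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share = {"18-34": 0.18, "35-54": 0.34, "55+": 0.48}  # young people are hard to reach

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}
# {'18-34': 1.67, '35-54': 1.03, '55+': 0.73}: answers from under-represented
# young respondents count for more, those from over-represented older ones for less.

def weighted_yes_share(responses):
    """responses: list of (group, said_yes) tuples from the raw sample."""
    total = sum(weights[group] for group, _ in responses)
    yes = sum(weights[group] for group, said_yes in responses if said_yes)
    return yes / total
```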

And it can lead to some startlingly different outcomes. Last month, The New York Times gave five different pollsters the same raw data on whom a group of people would vote for; their adjustments produced a variety of results, ranging from Hillary Clinton being 4% ahead to Donald Trump being 1% ahead. Each pollster made slightly different assumptions about how best to weight the sample. In America, voting is not compulsory, so pollsters have to guess, based on past experience, how likely people of different ages, genders and races are to actually vote on polling day. It’s more art than science — there’s a lot of room for disagreement.
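A toy version of that turnout judgment (all figures invented) shows how two analysts can take identical raw numbers to opposite headlines:

```python
# Invented raw sample: two age groups with different candidate preferences.
sample = {"under_40": {"share": 0.5, "clinton": 0.60},
          "over_40": {"share": 0.5, "clinton": 0.45}}

def clinton_share(turnout):
    """Weight each group by its assumed probability of actually voting."""
    voters = sum(g["share"] * turnout[name] for name, g in sample.items())
    clinton_votes = sum(g["share"] * turnout[name] * g["clinton"]
                        for name, g in sample.items())
    return clinton_votes / voters

# Pollster A assumes the young turn out; Pollster B assumes they stay home.
print(f"A: {clinton_share({'under_40': 0.7, 'over_40': 0.7}):.1%}")  # 52.5% -- Clinton by 5
print(f"B: {clinton_share({'under_40': 0.4, 'over_40': 0.8}):.1%}")  # 50.0% -- dead heat
```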

How the question is phrased

Everyone is familiar with push-polling, and polls that don’t reveal the question people were asked aren’t worth much. But even relatively factual questions can predispose people to answer one way or another.

[What if our collective racism turned again to Asians?]

The basic rule of thumb, says Essential polling research director Andrew Bunn, is to give people no extra information. He gives the example of a poll that tells people what the Labor and Liberal positions on an issue are before asking them what they think. That framing splits respondents along party lines in a way they otherwise wouldn’t be.

Who paid for it?

For the major political pollsters, political polling is a high-profile loss-leader for their real business, which is market research. That research often isn’t publicly released — it belongs to the companies that commissioned it, to do with as they want.

When companies do release research, you can ask all the questions above of it. But it’s also worth looking at which polling company did the research. Pollsters that are members of the Australian Market and Social Research Society are supposed to adhere to the industry body’s codes on how to poll responsibly.

Polls, or surveys, are a classic way to generate media attention. But polls that don’t correct for demographics have little statistical validity, no matter how large the sample.

Is there anything you shouldn’t poll?

Bunn says the poll on Muslim immigration did prompt some discussion in the office. After the first result came in, the poll was repeated months later, to make sure an anomalous result didn’t generate false headlines. Further questions were also asked to dig deeper into the results, though these attracted far less media commentary. Respondents who did want to ban Muslim immigration were asked why that was the case.

“We gave them a list of options,” said Bunn. “The one that came up most clearly was the view that Muslim people don’t integrate into Australian society. It wasn’t terrorism, or security fears, that were behind that view, it was more around these people are different … We also put up a number of statements about Pauline Hanson and her views, and many agreed with them.”

Bunn has been at Essential for nine years, and he acknowledges that in that time there have been some results the company decided not to release publicly. It’s been a matter of sensitivity: some polls the company has done had the potential to cause repercussions for people in difficult circumstances.

But generally, Bunn is an advocate of polling more issues rather than fewer. Australian professional polling has quite a good track record when it comes to gauging things like voting intention (perhaps unsurprisingly — it’s far easier to poll for predictive purposes when voting is compulsory). But Australian polling tends to be quite conservative — focused on a narrow range of issues like how people plan to vote and what they think of political leaders. There’s an opportunity to do far more, Bunn says.

“Polling should be helping us to understand and explain public opinion and contributing to the public discussion — why we favour one party over the other, what we want from the government, where we stand on current issues. Having a measure of public opinion on a range of different issues can provide the basis for a better informed public debate.”