Like politics itself, polling can be as much art as science. One political strategist’s “bold plan” is another’s “rookie error”, and one pollster’s carefully worded question is another’s biased push poll. But while great artists might never agree on great art, honest scientists can usually agree on clear evidence. Usually.

Take the recent survey of almost 3000 voters in Coalition-held seats conducted on behalf of the Australia Institute by ReachTEL. Although the poll found that a majority of Coalition voters in Coalition-held seats are opposed to the idea of cutting penalty rates, some would-be pollsters and political strategists have raised concerns with the survey method and concluded that the smart thing for the Coalition to do would be to ignore the poll and press ahead with cuts to wages in an election year. Crazy brave or genius? Art or science?

While such debates have no doubt stopped more than one barbecue this summer, luckily there is a range of evidence we can draw on to help settle this one like scientists. Let’s start with the method.

On December 18, ReachTEL polled 2990 people and asked them whether they thought penalty rates should be raised, left the same, lowered or abolished. The survey found that between 65% and 79% of respondents, and between 54% and 67% of Liberal- or National-voting respondents in particular, did not support cutting or abolishing penalty rates. Pretty clear? Not according to some.

After a Tasmanian blogger named Kevin Bonham raised concerns about the absence of a “don’t know” option, The Australian’s Peter Brent dedicated a whole online column to critiquing the results. The absence of a “don’t know” option, Brent and Bonham told readers, shredded the significance of the polling evidence. But was their opinion art or science?

In order to pump up the scientific nature of his concerns, Brent stepped readers through a “thought experiment”. Imagine, he urged readers, “if we assume that half of the ‘stay the sames’ actually were ‘don’t knows’, then excluding them from the count produces majority support from Coalition supporters for either lowering or abolishing penalty rates in three of the four seats”.
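To see how that arithmetic plays out, here is a minimal sketch in Python using invented numbers for a single hypothetical seat (they are not the ReachTEL figures): reclassifying half of the “stay the same” group as “don’t know” and dropping them shrinks the base, which lifts the apparent share supporting a cut.

```python
# Illustrative only: hypothetical response shares for one seat,
# not the actual ReachTEL results.
responses = {"raise": 0.20, "same": 0.40, "lower": 0.25, "abolish": 0.15}

# Brent's thought experiment: treat half of the "stay the same" group
# as "don't know" and exclude them before recalculating shares.
dont_know = responses["same"] / 2
adjusted = dict(responses, same=responses["same"] - dont_know)
base = sum(adjusted.values())  # the excluded "don't knows" shrink the base to 0.8

before = responses["lower"] + responses["abolish"]
after = (adjusted["lower"] + adjusted["abolish"]) / base
print(f"Support for cutting/abolishing: {before:.0%} before, {after:.0%} after")
```

On these made-up numbers, support for cutting or abolishing rises from 40% to 50% without a single respondent changing their view; the entire lift comes from the size of the “don’t know” group you choose to assume.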

While Brent’s maths is correct, his science definitely is not. Brent and Bonham both start from the flawed assumption that the number of potential don’t knows is unknown. Indeed Bonham states confidently that “most ‘stay the sames’ would actually have no view” and that “a ‘don’t know’ option would certainly have changed the numbers considerably”.

While artists imagine, scientists measure, which is what the Australia Institute did in June 2015 when we first asked voters a similar, but not identical, question about penalty rates. What we found was that only 8% of people chose “don’t know” when asked about cutting penalty rates. That is a fair bit less than the 50% imagined by Brent.

While criticism often has no impact on artistic integrity, new data has a big impact on scientists seeking to discover the facts. The fact that the proportion of voters who chose “don’t know” in response to a question about cutting penalty rates is so much lower than Brent’s best guess suggests that the issue is of “high salience” to voters. Salience is a fancy pollster word for an issue people care about.

But how could Bonham or Brent have known about the additional polling results? While artists, and bloggers, often like to beaver away in their basements, scientists, and indeed journalists, prefer to check with the source. I’d have happily shared the results if they had asked. But not only did neither critic call me, it seems they didn’t call each other. While Bonham praised the Australia Institute’s willingness to release the questions and answers we collect, Brent has previously suggested we were unwilling to do exactly that.

Moving on from polling to political strategy, we can observe similar misplaced confidence when Brent states “the fact of majority voter opposition to a proposal doesn’t necessarily mean much politically”. While I don’t have another opinion poll to kill that conclusion off, I would suggest that John Howard once bet his party’s future on exactly that opinion. He lost.

Brent went on to add:

“Not every issue changes people’s votes. As a rule, and contrary to popular political class conviction, policies in isolation don’t much affect election results, in part because they aren’t necessarily believed. The ‘vibe’, what the party is ‘about’, is much more important, and that of course is influenced by policy announcements.”

On that we agree.

It is the job of politicians to bet which issues will be “salient” on election day and which will be nothing more than the fish-and-chip wrappers of tomorrow. Brent’s “best guess” at the share of “don’t knows” on penalty rates was roughly six times higher than the figure in our June poll. Maybe his gut instinct that voters don’t care about the wages they, their children and their grandchildren earn is correct. Maybe the right’s view that voters obsess about tax rises but don’t care about wage cuts is spot on. John Howard was willing to bet on that feeling, and no doubt Bill Shorten is hoping that Malcolm Turnbull will as well.

No one has ever suggested that any one opinion poll can divine the objective truth of voters’ opinions and intentions. But when multiple well-designed polls run by a diverse range of reputable companies such as the ones used by the Australia Institute suggest an issue is a turn-off for voters, it is a brave political scientist who simply prefers to trust their gut. Like a flashing red light on your dashboard, opinion polling can be a false alarm, but most drivers stop and check their oil nonetheless.

The Australia Institute is a fiercely independent think tank. Our research is independent of any political party. We thought hard about which questions to ask and thought carefully about how to present them. Of course, anyone is free to analyse any polling outcomes as they see fit — indeed we encourage it — but those who seek to undermine the validity of these results via name-calling only do themselves a disservice.