Bad public polls are like toupees. Some you can spot right away. Others you have to squint to catch a glint from a poorly placed dollop of glue. Unfortunately, to many in the media, each new poll is a thick, rich head of real hair, and they report it that way.

Why is this a problem? In the presidential race, public polls shape the conventional wisdom and drive media attention to the perceived frontrunners. The frontrunners benefit from “free media” and garner more resources to spend on advertising and field operations. Survey research has taken on even greater significance this year, with polls determining who participates in the major debates, such as the CNN Republican debate in Nevada on Tuesday.

When confronted with yet another poll that suggests the political world has gone sideways again, ask yourself five simple questions.

Look at any new public poll with your bullshit detector turned up to 11.

Any group that gives away polling for free has an agenda. Maybe it is to increase click-through rates (“80 percent of Americans want pizza for dinner. What happens next will surprise you…”), to make people reconsider their position (“75 percent of Americans prefer pepperoni. Get with the times, cheese-lovers.”), or to help its side win an election (“Thin crust is soundly beating thick crust, 60 percent to 40 percent. Make your checks out to ‘Thin Crust for President.’”).

And pay attention to polls that are released by organizations you disagree with or whose results run contrary to your worldview. Groups with an opinion can release good research, and there are organizations out there dedicated to providing an honest assessment of American opinion.

Know who sponsored the poll.

A common mistake in the reporting of polls is not understanding whom the survey is studying: all adults, registered voters or some definition of so-called “likely” voters? Each group is a significantly smaller subset of the previous one.

On the surface, these differences may not be apparent, but they make comparisons between poll results challenging.

Furthermore, pollsters often make demographic, geographic and partisanship assumptions about the population they are studying. These assumptions can have a profound impact on the results. If one poll has more Republicans in its “sample” while a different poll the next week has more Democrats, do we trust the conclusion that the Democrat is gaining on the ballot? More likely, the apparent movement is a side effect of the different assumptions pollsters brought to their work.
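To see how much sample composition alone can move a topline number, here is a minimal sketch in Python. The partisan mixes and within-group support figures are invented for illustration, not drawn from any real poll:

```python
def topline(mix, support):
    """A candidate's overall share: sum over groups of (share of sample) x (support in group)."""
    return sum(mix[group] * support[group] for group in mix)

# Hypothetical within-group support for the Republican candidate (illustrative numbers)
support = {"R": 0.90, "D": 0.10, "I": 0.50}

week1 = {"R": 0.35, "D": 0.30, "I": 0.35}  # sample leans slightly Republican
week2 = {"R": 0.30, "D": 0.35, "I": 0.35}  # sample leans slightly Democratic

print(f"Week 1 GOP share: {topline(week1, support):.0%}")  # prints 52%
print(f"Week 2 GOP share: {topline(week2, support):.0%}")  # prints 48%
```

No voter changed his or her mind between these two hypothetical polls; the four-point “movement” comes entirely from who happened to land in each sample.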

Discover who did the interviews and who was interviewed.

While the majority of political polling is still done over the phone with live interviewers, there are plenty of automated phone polls, along with online, mobile, the occasional mail and, rarely, door-to-door surveys.

Some pollsters still use a variation of the random digit dial (RDD) approach, while others have moved entirely to voter lists, which are meticulously maintained and appended with contact and consumer information. As good as these lists are, they are far from perfect.

Understanding how the poll reached respondents can help you understand results and conclusions. For instance, online polls and automated phone polls have consistently shown He-Whose-Orange-Hair-Shall-Not-Be-Named polling higher than live-interviewer polls have.

Find out how and when the surveys were conducted.

When looking at how the questions in a poll were written, there are three critical factors to consider.

The first is the question wording itself. On well-known issues, question wording may have less of an impact; on issues just being introduced to people, it can make all the difference. The second is question order: the issues raised and questions asked early in the survey can alter a respondent’s perspective on a key question asked later on.

The third is distance from the source. As one moves away from the actual question wording, to the press release, to the reporter, to the headline writer, it can be like the child’s game of “telephone.” Does what is being reported match what the question truly asked?

Demand to see the entire questionnaire.

We’ll end on the question of interpretation and analysis, which often starts with “margin of error.” Explaining how to calculate margin of error at the 95 percent confidence level would require you to do math and me to find the square root symbol on my keyboard. Neither of us wants that.

Instead, let’s consider a hypothetical. A poll in your state shows an incumbent leading a challenger 45 percent to 35 percent. What is the correct analysis?

1. “Incumbent leads challenger by a clear ten points. Well on the path to victory.”
2. “Well-known incumbent fails to reach 50 percent support vs. upstart. Highly vulnerable to defeat.”
3. “Twenty percent of voters are undecided. They will determine who wins.”

The margin of error applies to each number reported. If the margin of error is plus or minus 6 percent, the above poll is telling us the incumbent is likely somewhere between 39 percent and 51 percent, and the challenger somewhere between 29 percent and 41 percent. Those ranges overlap, so we cannot be entirely confident the advantage is real. It could be greater; it could be nonexistent.
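For readers who do want the math, those ranges follow from the standard formula for a 95 percent confidence interval, MOE = 1.96 × √(p(1 − p)/n). This sketch assumes a simple random sample, and the sample size of 267 is invented purely to yield roughly the plus-or-minus 6 percent of the hypothetical:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95 percent margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: incumbent 45%, challenger 35%, n = 267 respondents (illustrative)
n = 267
inc, chal = 0.45, 0.35
moe = margin_of_error(0.5, n)  # pollsters typically report MOE at p = 0.5, the worst case

print(f"MOE: +/-{moe:.1%}")                              # prints MOE: +/-6.0%
print(f"Incumbent: {inc - moe:.0%} to {inc + moe:.0%}")  # prints Incumbent: 39% to 51%
print(f"Challenger: {chal - moe:.0%} to {chal + moe:.0%}")  # prints Challenger: 29% to 41%
```

Note that the worst-case margin shrinks only with the square root of the sample size: quadrupling n merely halves the margin of error.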

Therefore, each of those three analyses can be a reasonable interpretation of the poll, depending on everything else seen in the survey and known about the overall dynamics of the race.

Understand the margin of error.

When drawing conclusions from a public poll, I leave you with three cautions:

· Do not allow one poll to fundamentally change your thinking, and do not treat small “within the margin” movement of the numbers with the media’s breathless excitement.

· Do not accept the explanation for the results of a poll simply because it fits into the media’s existing narrative of what is happening in the race.

· Do not expect a poll to predict the future, which is unfortunately what many expect. Polls ask voters what they are thinking in the moment, and opinions evolve.

The polling industry is changing rapidly, and there are a lot of smart people searching for ways to build the next Death Star. But as Lord Vader warned, we must not be too proud of this technological terror we’ve constructed. Polling should exist to enlighten the public, not control it. If there are enough informed people out there, we can help keep it that way.

B.J. Martino is senior vice president at the Tarrance Group. Twitter: @bjmartino.