No survey is free of biases. The most trustworthy polls are those that try to diminish the effect of biases. This is one way in which people, especially those with an agenda, can deceive us with statistics—they conceal, or worse, purposefully design their polls and surveys with predetermined biases.
To keep this post from running longer than it should, check out one of my previous posts on statistics if you are unfamiliar with inferential statistics, which is what I'll be referring to here.
These are some biases that can affect the data in polls and surveys.
The major bias comes from participant selection. Suppose a consumer researcher wanted to know whether the citizens of a particular city, say, Metropolis, preferred to eat at McDonald's over Burger King. The researcher, wanting to save time and do things effortlessly, decides to wait outside a McDonald's to gather participants for his survey. At the end of the day, his results showed that 90% of individuals preferred McDonald's over Burger King. The opposite results would have been obtained had he decided to camp outside a Burger King instead. This does not necessarily mean that most of the city's citizens prefer one fast-food restaurant over the other, simply that the researcher did not choose his participants wisely. This is why random sampling is crucial when carrying out polls, surveys, or any other inferential study.
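A quick simulation makes the point concrete. The numbers below are made up for illustration: assume a hypothetical population split 50/50 between the two restaurants, and assume that the crowd outside McDonald's is overwhelmingly made up of its own fans. A random sample recovers the true split; the convenience sample does not.

```python
import random

random.seed(42)

# Hypothetical population: exactly 50% prefer each restaurant (an assumption
# made for illustration, not a real survey result).
population = ["McDonald's"] * 5000 + ["Burger King"] * 5000

def preference_rate(sample):
    """Fraction of a sample that says it prefers McDonald's."""
    return sum(p == "McDonald's" for p in sample) / len(sample)

# Random sampling: every citizen is equally likely to be chosen.
random_sample = random.sample(population, 200)

# Convenience sampling outside a McDonald's: its fans are far more likely
# to walk past, so the sample over-represents them (here, 180 of 200).
mcd_fans = [p for p in population if p == "McDonald's"]
bk_fans = [p for p in population if p == "Burger King"]
convenience_sample = random.sample(mcd_fans, 180) + random.sample(bk_fans, 20)

print(f"Random sample:      {preference_rate(random_sample):.0%} prefer McDonald's")
print(f"Outside McDonald's: {preference_rate(convenience_sample):.0%} prefer McDonald's")
```

The random sample lands near the true 50%, while the convenience sample reports 90%—the same misleading figure the researcher in the story obtained, even though nothing about the population changed.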
The general idea applies to popular media. A liberal news network (let's call it Liberal-8 News) sets up a poll on its website. The results it obtains lean toward a more liberal view. This could mean either (1) that more people in the general public hold liberal views or (2) that most of Liberal-8 News's viewers are liberals. The same goes for magazine polls. Suppose a hypothetical Christian magazine specifically talks about the power of prayer and regularly publishes anecdotes of its wonders. If this magazine runs a poll asking, "Does God answer prayers to heal the faithful?", chances are somewhere around 96% or 97% of respondents will answer yes (the remaining percentage accounting for pesky atheists who came upon the poll by chance). The opposite goes for magazines that embrace skepticism, where the same question would generate a "no" in the 90s. Neither result lets us infer whether the general public believes prayers are answered; it reflects merely the opinions of each magazine's own subscribers. Again, this is why researchers try (or at least should try) to randomize participants.
A bias introduced by the interviewer is not uncommon. Imagine that a school project requires you to go out and ask random people on the street whether they buy more things than they actually need. If you are a woman, you may feel more inclined to approach other women; the opposite may occur if you are a man. A survey that only includes participants of one sex cannot be representative of the general population. Similarly, if you decide to talk only to people who seem well-off because those who look disheveled make you uneasy, this introduces a socioeconomic bias. Something similar may occur if the interviewer is prejudiced against, or shy about approaching, persons of a different race or ethnic group.
If this weren't enough, another bias can arise from the interviewee or participant. As much as the researcher may stress that the poll or survey is anonymous, participants may still lie to make themselves look good. Earlier this week I was completing a survey for my Human Sexuality course. I worked on it at the campus library and, because other people were close by (there was a girl sitting next to me who occasionally turned to look at my screen), I felt a bit uncomfortable answering questions like, "How often do you think of sex?" or, "Are you satisfied with your current sex life?" Likewise, people completing surveys that ask personal questions about their sex lives, drug use, abuse in the family, or similar topics may lie to make themselves look good. This obviously contaminates the data and renders the results useless.
These are some biases mentioned in the first chapter of Darrell Huff's book How to Lie With Statistics, along with a couple of things mentioned by former professors in my Basic Statistics for Behavioral Sciences and Research Methods for B.S. courses. Other biases can arise from different factors, but I think I've made my point here: Don't believe everything you hear in polls and surveys without first questioning how the researcher reduced biases and considering how the biases involved may have influenced the results.
[This is part of an ongoing series entitled Bad Statistics. Check out the next post: Bad Statistics 2: Shady Averages]