The issue
Asher discusses different types of polling techniques, how they are conducted, and how data and results can be skewed by factors such as question order, wording, poll length, and sampling error.
Major strength
Asher does a good job of describing the different types of polls that exist, as well as why people respond the way they do. For example, I never thought about how much question order can affect respondents’ answers. He gives the example of a poll in which, when people were first asked where the country was headed, Bush received a lower approval rating than when they were first asked about his specific performance. He also makes a valid argument that respondents will often resist participating in a long survey or poll and may resent it if they were misled about its actual length. From a personal perspective, Morgan and I felt this way when we filled out our Census form last week (we are roommates). The questionnaire was longer than expected, and we found ourselves making up some answers in a rush to get it done or bypassing questions we didn’t know the answer to…oops?
Another interesting argument Asher makes is that the visual design of self-administered questionnaires can affect responses. This might be why I failed to fill out an entire section of the “quiz” Professor Gaither gave us the first day of class; I didn’t notice the third column of answers. However, that may be due to user error (stupidity) and not a poorly designed questionnaire. 🙂
Major weakness
I wonder if Asher is a bit too presumptuous. In his explanation of the Gallup poll on the legality of homosexual relations, he discusses the “possibility” of why the answers turned out the way they did. He tends to do this in most of his discussions of poll results. He’s not necessarily wrong; it just makes me wonder if he assumes too much.
Something else I was wondering about was the idea of cluster sampling. While Asher makes the point that cluster sampling allows for sampling within geographic regions based on county, city, township, etc., I feel he does not take into account that the demographics of each household could vary. Sure, these households could all be in the same county or town, but who’s to say the ethnicity or income level of each is the same?
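To make this worry concrete, here is a minimal sketch (not from Asher, and using entirely made-up counties and income figures) of one-stage cluster sampling: whole geographic clusters are drawn, every household inside them is surveyed, and yet household incomes within a single sampled cluster can still vary widely.

```python
import random

random.seed(0)

# Hypothetical population: households grouped into geographic clusters
# (counties). Each household's income is drawn independently, so
# demographics vary even within one cluster.
clusters = {
    county: [
        {"county": county, "income": random.gauss(50_000, 15_000)}
        for _ in range(100)
    ]
    for county in ["A", "B", "C", "D", "E"]
}

# One-stage cluster sample: pick 2 whole counties at random and
# survey every household in them.
sampled_counties = random.sample(sorted(clusters), 2)
sample = [hh for c in sampled_counties for hh in clusters[c]]

# Income spread inside a single sampled cluster is still large,
# even though all of its households share one geographic label.
incomes = [hh["income"] for hh in clusters[sampled_counties[0]]]
spread = max(incomes) - min(incomes)
print(len(sample), round(spread))
```

Geography is only the sampling unit here; it does not homogenize the households within it, which is exactly the concern above.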
Finally, Asher describes how telephone interviews can often be the better alternative to door-to-door interviews. This contradicts what we saw in The Persuaders, when the campaign team goes to individuals’ doors to ask them about their voting choice. The documentary portrays this practice as appealing to respondents (although I think I would have to agree with Asher that this is not usually the case).
Underlying assumption
I don’t want to go so far as to say Asher is insinuating that we can’t trust polls due to error and user perception, but I think he has something to say about not taking them at face value. While some polling practices, such as snowball sampling, leave less room for error, none may be entirely free from error or bias. I think he wants to note, though, that this is the reason researchers perform all these different types of polling: they want to obtain the most accurate information they can.
Provocative questions:
As technology advances, what are the implications for online polling? Asher doesn’t speak about this; where could sampling error and bias be introduced in this arena?
As I mentioned in my Census example, how often do respondents simply answer blindly or incorrectly? And how much does that affect what we believe to be true?