Public opinion poll

A public opinion poll is a questionnaire used to measure public opinion, or the collective attitudes held by a population. Because it is impractical to administer a questionnaire to every member of a large population, public opinion polls estimate the opinions of the population as a whole by surveying a sample that is sufficiently large and representative to produce statistically valid results.

Polling is by far the predominant means of measuring public opinion today, and poll administration practices have grown increasingly sophisticated and rigorous since the enterprise's inception in the 1930s. Nevertheless, it remains an imperfect instrument, the accuracy of which is frequently compromised by a variety of factors over which even the most diligent pollsters exert limited control.

History of opinion polls

While straw polling, which estimates public opinion based on informal sampling and surveying procedures, dates back at least to the early nineteenth century, the emergence of scientific public opinion polling is a much more recent development.

In 1916, the large-circulation U.S. magazine Literary Digest embarked on a national survey (partly as a circulation-raising exercise) and correctly predicted Woodrow Wilson's election as president. Mailing out millions of postcards and simply counting the returns, the Digest correctly called the following four presidential elections.

In 1936, however, the Digest came unstuck. Its 2.3 million "voters" constituted a huge sample, but they were generally more affluent Americans who tended to have Republican sympathies. The Literary Digest saw the bias but did not know how to correct it. The week before election day, it reported that Alf Landon was far ahead of Franklin D. Roosevelt. At the same time, George Gallup conducted a far smaller but more scientifically based survey, polling a demographically representative sample. Gallup correctly predicted Roosevelt's landslide victory. The Literary Digest went out of business soon afterwards, while the polling industry took off.

Gallup launched a subsidiary in Britain, where it correctly predicted Labour's victory in the 1945 general election, in contrast with virtually all other commentators, who expected the Conservative Party, led by Winston Churchill, to win easily.

By the 1950s, polling had spread to most democracies. Nowadays polls reach virtually every country, although in more autocratic societies they tend to avoid sensitive political topics. In Iraq, surveys conducted soon after the 2003 war helped measure the true feelings of Iraqi citizens toward Saddam Hussein, post-war conditions and the presence of US forces.

For many years, opinion polls were conducted mainly face-to-face, either in the street or in people's homes. This method remains widely used, but in some countries it has been overtaken by telephone polls, which can be conducted faster and more cheaply. However, because telemarketers commonly sell products under the guise of a telephone survey, and because of the proliferation of residential call-screening devices and cell phones, response rates for phone surveys have plummeted. Mailed surveys have become the data collection method of choice among local governments that conduct citizen surveys to track service quality and manage resource allocation. In recent years, Internet and short message service (SMS, or text) surveys have become increasingly popular, but most of these draw on whoever wishes to participate rather than a scientific sample of the population, and are therefore not generally considered accurate.

Polling procedures

Design

Administration

Data analysis

Sources of inaccuracy

Sampling error and bias

All polls administered to population samples are subject to sampling error, which refers to the extent to which the opinions expressed by the surveyed sample do not reflect the opinions of the population as a whole. Sampling error is typically expressed as a confidence interval of plus or minus some number of percentage points at a given statistical confidence level. For example, the maximum sampling error (MSE) for a sample of 1,050 drawn from a population of 1,000,000 is +/-3 percentage points at the 95% confidence level; this means that there is a 95 percent chance that the results of a survey administered to that sample will fall within a 6-point range centered on the true opinion of the population as a whole.

Pollsters can reduce sampling error by administering a poll to a larger sample. For example, a sample of 10,000 drawn from the 1,000,000-member population would yield an MSE of +/-1% at the 95% confidence level, and a sample of 100,000 would reduce the MSE to just +/-0.3%. Because sampling error shrinks only with the square root of the sample size, however, increasing a sample enough to reduce sampling error substantially usually entails undue financial and logistical costs.
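
This arithmetic can be checked directly. The following is a minimal sketch in Python, assuming simple random sampling, a normal approximation, and the worst-case proportion p = 0.5; the function name is illustrative:

  import math

  def margin_of_error(n, population=1_000_000, p=0.5, z=1.96):
      """Maximum sampling error at the 95% confidence level (z = 1.96)."""
      se = math.sqrt(p * (1 - p) / n)                       # standard error of a proportion
      fpc = math.sqrt((population - n) / (population - 1))  # finite population correction
      return z * se * fpc

  for n in (1_050, 10_000, 100_000):
      print(f"n = {n:>7,}: +/-{margin_of_error(n) * 100:.1f} points")
  # n =   1,050: +/-3.0 points
  # n =  10,000: +/-1.0 points
  # n = 100,000: +/-0.3 points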

Sampling error does not reflect other sampling-related sources of inaccuracy, including sampling bias, which comes about when a poll is administered to a sample, however large, that is not representative of the population as a whole. A form of selection bias, sampling bias can be the result of a variety of factors, including convenience sampling, the use of an inappropriate sampling frame, and non-response bias.

Convenience sampling

Convenience sampling is the practice of administering a poll to the individuals who are easiest to recruit, regardless of whether they are representative of the population whose opinions the poll is intended to measure.

Sampling frame bias

A sampling frame is a defined set of individuals within a population from which a sample is to be drawn. It may, but does not necessarily, consist of a literal list of all of the population's members. In fact, exhaustive population lists often do not exist or cannot be readily obtained by pollsters. When this is the case, pollsters use some sort of proxy frame, which may consist of a literal list, such as a directory of listed telephone numbers, or a figurative one, as in the case of random digit dialing, which samples from a hypothetical "list" of all possible telephone number permutations. Coverage error is the discrepancy between such non-exhaustive sampling frames and the full population. To the extent that a sampling frame's non-coverage of the population systematically excludes some segments of the population, poll results will suffer from coverage bias. For example, random digit dialing excludes those population members who do not have telephones, a group that is not evenly distributed within the population, since it most likely consists of individuals at the lower end of the socioeconomic spectrum. A telephone directory sampling frame yields still more coverage error and bias, since it excludes not only those population members who do not have telephones, but also those who do but have unlisted numbers. This second excluded group is also not likely to be evenly distributed within the population; for example, the burgeoning "cell-phone only" sector, whose phone numbers are unlisted by default, draws disproportionately from the younger segments of the population.
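
For illustration, here is a toy sketch of how random digit dialing draws from such a figurative "list" (Python; the area code is invented, and real-world refinements such as list-assisted designs are omitted):

  import random

  def random_digit_dial(n, area_code="555", seed=1):
      """Draw n numbers from the figurative frame of all possible
      local numbers within one (hypothetical) area code."""
      rng = random.Random(seed)
      # Exchanges run 200-999 because the first digit may not be 0 or 1.
      return [f"({area_code}) {rng.randrange(200, 1000)}-{rng.randrange(10000):04d}"
              for _ in range(n)]

  numbers = random_digit_dial(1000)

Every possible number, listed or unlisted, has an equal chance of being drawn, which is why this frame excludes only those without telephones.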

While coverage bias is typically associated with undercoverage, or the exclusion of one or more segments of the population, it is also possible for a sample to suffer from overcoverage -- that is, the inclusion of individuals who do not strictly belong in the population of interest. For example, pre-election polls in the United States sometimes use a sampling frame that includes all adult Americans, regardless of whether they are registered or likely to vote. To the extent that the opinions of non-voters, who are disproportionately young, less educated and non-affluent, differ systematically from voters', their inclusion skews the results and makes them difficult to use for election forecasting and campaign strategy purposes. To avoid this problem, many polling organizations limit their pre-election poll samples to either registered voters or, increasingly, to "likely voters," whom they typically identify with a battery of questions at the start of the poll about past voting behavior and levels of political interest.[1]
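
A minimal sketch of such a likely-voter screen (Python; the questions, point values, and cutoff are invented for illustration and do not reflect any polling organization's actual model):

  def likely_voter_score(answers):
      """Toy index: award points for responses that suggest a likely voter.
      `answers` maps screening-question names to a respondent's replies."""
      score = 0
      if answers.get("voted_in_last_election"):
          score += 2
      if answers.get("knows_polling_place"):
          score += 1
      if answers.get("interest_in_campaign") == "high":
          score += 2
      return score

  def filter_likely_voters(respondents, cutoff=3):
      """Keep only respondents whose screen score meets the cutoff."""
      return [r for r in respondents if likely_voter_score(r) >= cutoff]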

Non-response bias

Whereas coverage bias stems from pollsters' sampling frame choices, non-response bias is caused by sampled individuals' decisions about whether to participate in polls. Since some people do not answer calls from strangers or refuse to respond to polls, samples may lack population representativeness despite pollsters' best efforts to construct them appropriately. As with those excluded from participation by inappropriate sampling frames, the characteristics of the people who agree to be interviewed may be systematically different from those of the people who decline. To the extent that this is the case, non-response bias ensues and contributes to inaccurate polling results.

Nonattitudes and insincere opinions

Also known as "pseudo-opinions," nonattitudes refer to the propensity for respondents to express an opinion despite not actually having one. First identified by political scientist Philip Converse in 1964,[2] the problem of nonattitudes is a constant source of vexation for public opinion researchers.

A related source of inaccuracy in public opinion polling is respondent insincerity, or the expression of opinions that are not sincerely held. Often, this takes the form of social desirability response bias (SDRB), which refers to respondents' tendency to provide answers that, true or not, present them in the most socially acceptable light.

SDRB is frequently cited as a factor in the Bradley effect (also known as the Wilder effect or Bradley-Wilder effect) that is sometimes evident in elections featuring a black candidate running against a white opponent.

Question effects

Another potential source of inaccuracy in public opinion polling is the content of the questionnaire itself. Specifically, the wording of questions, the order in which they are asked, and the response alternatives that are made available to respondents can all influence the results of public opinion polls.

Question wording

Question order

Response alternatives

Thus, comparisons between polls often boil down to the wording of the question. On some issues, question wording can result in quite pronounced differences between surveys. This can also, however, be a result of legitimately conflicted feelings or evolving attitudes, rather than a poorly constructed survey. One way in which pollsters attempt to minimize this effect is to ask the same set of questions over time, in order to track changes in opinion. Another common technique is to rotate the order in which questions are asked. Many pollsters also split-sample: two different versions of a question are prepared, and each version is presented to half the respondents.
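
A minimal sketch of split-sample assignment (Python; the two wordings are invented for illustration):

  import random

  # Two hypothetical wordings of the same underlying question.
  VERSION_A = "Do you favor or oppose increased spending on assistance to the poor?"
  VERSION_B = "Do you favor or oppose increased spending on welfare?"

  def assign_split_sample(respondent_ids, seed=7):
      """Randomly split respondents in half; each half sees one wording."""
      rng = random.Random(seed)
      ids = list(respondent_ids)
      rng.shuffle(ids)
      half = len(ids) // 2
      groups = {rid: VERSION_A for rid in ids[:half]}
      groups.update({rid: VERSION_B for rid in ids[half:]})
      return groups

  assignments = assign_split_sample(range(1000))

Comparing the response distributions of the two halves then isolates the effect of the wording itself.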

The most effective controls, used by attitude researchers, are:

  • asking enough questions to cover all aspects of an issue and to control for effects due to the form of the question (such as positive or negative wording), with the adequacy of the number of questions established quantitatively by psychometric measures such as reliability coefficients, and
  • analyzing the results with psychometric techniques which synthesize the answers into a few reliable scores and detect ineffective questions.

These controls are not widely used in the polling industry.
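
As one illustration of the reliability coefficients mentioned above, here is a minimal sketch of Cronbach's alpha, a standard measure of how consistently a set of questions taps the same underlying attitude (Python with NumPy; the scores are invented):

  import numpy as np

  def cronbach_alpha(scores):
      """Cronbach's alpha for a (respondents x questions) matrix of answers."""
      scores = np.asarray(scores, dtype=float)
      k = scores.shape[1]                              # number of questions
      item_variances = scores.var(axis=0, ddof=1).sum()
      total_variance = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
      return (k / (k - 1)) * (1 - item_variances / total_variance)

  # Five respondents answering three related attitude questions on a 1-5 scale.
  answers = [[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 3, 2], [1, 2, 1]]
  print(f"alpha = {cronbach_alpha(answers):.2f}")  # about 0.96; values near 1 indicate high reliability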

Mode of interview and interviewer effects

Poll results might also be skewed by the method used to administer the poll.

Polls that are not self-administered -- that is, those in which responses are recorded by an interviewer rather than by the respondent -- are also subject to interviewer effects: inaccurate results due to the tendency of respondents to tailor their answers to their perception of the interviewer's race, gender, or age.

Bad polling examples

An oft-quoted example of opinion polls succumbing to errors is the British general election of 1992. Despite the polling organisations using different methodologies, virtually all the polls in the lead-up to the vote (and exit polls taken on voting day) showed a lead for the opposition Labour party, but the actual vote gave a clear victory to the ruling Conservative party.

In their deliberations after this embarrassment, the pollsters advanced several ideas to account for their errors, including:

  • Late swing. The Conservatives gained from people who switched to them at the last minute, so the error was not as great as it first appeared.
  • Nonresponse bias. Conservative voters were less likely to participate in the survey than in the past and were thus underrepresented.
  • The spiral of silence. The Conservatives had suffered a sustained period of unpopularity as a result of economic recession and a series of minor scandals. Some Conservative supporters felt under pressure to give a more popular answer.

The relative importance of these factors was, and remains, a matter of controversy, but since then the polling organisations have adjusted their methodologies and have achieved more accurate predictions in subsequent elections.

The influence of opinion polls

By providing information about voting intentions, opinion polls can sometimes influence the behaviour of electors. The various theories about how this happens can be split up into two groups: bandwagon/underdog effects, and strategic ('tactical') voting.

A bandwagon effect occurs when the poll prompts voters to back the candidate shown to be winning in the poll. The idea that voters are susceptible to such effects is old, stemming at least from 1884; Safire (1993: 43) reported that the term was first used in a political cartoon in the magazine Puck in that year. The idea also persisted despite a lack of empirical corroboration until the late 20th century. George Gallup spent much effort, in vain, trying to discredit the theory in his time by presenting empirical research. A recent meta-study of scientific research on this topic indicates that from the 1980s onward the bandwagon effect has been found more often by researchers (Irwin & van Holsteyn 2000).

The opposite of the bandwagon effect is the underdog effect, which is often mentioned in the media. It occurs when people vote, out of sympathy, for the party perceived to be 'losing' the election. There is less empirical evidence for the existence of this effect than for the bandwagon effect (Irwin & van Holsteyn 2000).

The second category of theories on how polls directly affect voting is called strategic, or tactical, voting. This theory is based on the idea that voters view the act of voting as a means of selecting a government. Thus they will sometimes not choose the candidate they prefer on grounds of ideology or sympathy, but another, less-preferred, candidate out of strategic considerations. An example can be found in the British general election of 1997: the constituency of Enfield, held by the then Cabinet Minister Michael Portillo, was believed to be a safe seat, but opinion polls showed the Labour candidate Stephen Twigg steadily gaining support, which may have prompted undecided voters or supporters of other parties to back Twigg in order to remove Portillo. Another example is the boomerang effect, where the likely supporters of the candidate shown to be winning feel that he or she is "home and dry" and that their vote is not required, thus allowing another candidate to win.

These effects indicate only how opinion polls directly affect the political choices of the electorate. Other effects can be found among journalists, politicians, political parties, civil servants, etc., in the form of, among other things, media framing and shifts in party ideology.

References

  1. See, e.g., Frank Newport, "Who Are Likely Voters and Why Do They Matter?" Gallup Organization, July 28, 2008 (accessed May 15, 2009).
  2. Philip E. Converse, "The Nature of Belief Systems in Mass Publics," in Ideology and Discontent, David E. Apter, ed. (New York: Free Press, 1964) pp. 206-61; see also Converse, "Attitudes and Non-Attitudes: Continuation of a Dialogue," in The Quantitative Analysis of Social Problems, Edward R. Tufte, ed. (Reading, MA: Addison-Wesley, 1970) pp. 168-89.