Predicting and influencing behavior with surveys


Abstract/Contents

Abstract
Predicting human behavior and understanding how different factors may influence future behaviors are central research themes across the social sciences. Many of the factors that are thought to predict behavior are measured using surveys and a large body of research has been devoted to developing better methods of generating predictions using survey data. Surveys are one of the most important modes of communication between many populations of people and the decision-makers that need data to inform their decisions. The federal government commonly conducts large national surveys aimed at measuring and predicting important social phenomena such as demographic change, economic conditions, health outcomes and political attitudes and opinions to inform decisions about government programs. Similarly, journalists frequently use the results of public opinion surveys speculate about future events and as a key source of inform their reporting about a range of political topics from the campaign horse-race to policy support among the public. Businesses similarly rely heavily on surveys to enable their consumers and employees to communicate their preferences and to predict future behaviors from purchasing to employee turnover. And political campaigns rely on polls to both predict and influence the behavior of voters. It is important to understand how psychological processes involved in the generation and consumption of survey and polling data may be used to both predict and influence behavior. There may be opportunities to take advantage of the fact that the behavior of survey respondents can be influenced by the answers that they provide in surveys. Using survey measurement to influence behavior is not a new concept, but understanding different ways that this idea can be effectively leveraged may prove valuable. 
Predicting behavior with political surveys is particularly important because these measures have the potential to play a powerful role in the functioning of democratic society by providing the mass public a collective mode for communicating their attitudes, preferences, and opinions to political elites. Pre-election polls are often used to assess the preferences of the subset of respondents who are thought to be most likely to vote, leading polling organizations to generate and adopt methods designed to identify this subset of respondents. Prior research has suggested that pre-election poll results may influence important election outcomes such as turnout, candidate preferences, assessments of candidate viability, and campaign fund-raising. Consequently, it is important to ensure that these political surveys are conducted and analyzed in ways that maximize their accuracy. Furthermore, the results of pre-election polls are among the most frequently covered topics of political news during election campaigns, and these poll reports may influence the preferences and behaviors of people exposed to this information in a wide variety of ways. Understanding the effects of exposure to pre-election polling information may also be useful for understanding critical political outcomes such as candidate preferences and voter turnout. Over the course of three chapters, this dissertation explores issues in survey and polling methodology and the implications of citizen exposure to polling information. The first chapter examines the psychology of commitment in the context of attempts to improve survey data quality. The second chapter reports the results of a field experiment aimed at exploring the potential effects of exposure to pre-election polling information.
The third chapter provides evidence from the American National Election Studies, the Current Population Survey Voting Supplement, and exit polls for potential improvements to pre-election polling methodology, with implications for how pre-election polls and horse-race estimates are produced and analyzed.

Leveraging commitment to influence behavior

In many communication contexts, behavior modification is a desired outcome. At the individual level, behavior change is often the goal of persuasive messages, such as when a doctor attempts to get a patient to stop smoking. Attempts at behavior modification are also common in the domain of mass communication. Advertisements, such as those used by politicians to mobilize their supporters or demobilize their opponents' supporters, or those used by firms attempting to get consumers to purchase a product or service, are frequently aimed at altering recipient behavior. Prior research suggests that even minimal exposure or communication may result in attitude change (Zajonc 1968) and behavior change (Sherman 1980). The link between answering questions about a target behavior and respondents' subsequent performance of that behavior has been one important and reliable effect documented in the psychology literature. While most work on this question-behavior link has focused on behaviors that will take place farther in the future, such as voting, purchasing, or engaging in physical activity, earlier work in survey methodology by Charles Cannell and his colleagues indicated that asking respondents to commit to providing accurate responses at the beginning of an interview can indeed produce higher-quality data in that survey. However, the method developed by Cannell and his colleagues has not been widely adopted, nor has it been systematically tested in self-administered survey modes such as mail or web-based surveys. In these contexts, there is reason to believe that Cannell's results may not replicate.
For example, commitments made verbally to an interviewer may prove to be much more effective than those made in a web or mail survey. Additional research is needed to address these gaps in the extant literature. Using a series of experiments, this project evaluates the effects of respondent commitments to provide high-quality data on a variety of indicators of survey data quality and on the strength of experimental treatment effects for substantive research questions. The project leverages data from surveys on a variety of commercial and political topics that utilize samples from opt-in web panels, Amazon Mechanical Turk, and possibly student samples. While these data sources cannot be taken to be representative of the general population, they do provide a diverse set of individuals on which to assess the experimental treatment effects of commitment.

Pre-election polls and voting

During the months before elections, news coverage often focuses on the results of pre-election polls, and the aggregation of polling results on websites further increases their availability to the public. Recent changes in the modes and magnitudes of pre-election polling data collection create the potential for much greater geographic specificity in the representation of the status of the horse race. This article explores whether exposure to such polling information might affect candidate preferences and turnout. These hypotheses were tested in a field experiment, conducted during the 2012 general election, in which participants were randomly assigned to one of four treatment groups. Three of these treatment groups completed the pre- and post-election surveys and were also emailed three times in the two-week period before Election Day with results of hypothetical local pre-election polls consistently indicating either that Barack Obama was leading Mitt Romney, that Mr. Romney was leading, or that the race was essentially tied.
A control group completed the pre-election and post-election surveys only and received no emails about the election. In the post-election surveys, all three treatment groups reported having voted at a significantly higher rate than did the control group. This experimental evidence suggests that disseminating pre-election poll results widely and often may increase citizen participation in politics via voting. Importantly, there was no systematic change in the candidate preferences of those exposed to the experimental polling information, indicating that there was no "bandwagon effect" toward the candidate reported to be leading in the polls. These findings provide important evidence concerning the potential effects of exposure to pre-election polling information in the period immediately before elections; contrary to some prior research, the effects of this information appear to be normatively positive, since turnout increased and candidate preferences did not change.

Identifying likely voters

Researchers doing pre-election polling have traditionally believed that a random sample of a population includes both voters and non-voters, so predicting the outcome of an election requires ignoring the responses of people who will not vote. The most prominent method for doing so was developed by Paul Perry of the Gallup Organization in the 1970s. This paper reports one of the few published evaluations of the effectiveness of this method and comparisons with others. Data from the 2008 American National Election Studies (ANES) Time Series Survey (collected via face-to-face interviewing from a probability sample) were used to compare the accuracy of various approaches using three criteria: (1) post-election reports of turnout (compared with the government statistic), (2) post-election reports of candidate vote share (compared with the government statistics), and (3) the demographics of voters (compared with results from exit polls).
A simple self-report of turnout intentions was a surprisingly successful measure, and the popular Perry method performed substantially worse than other methods. These results have a clear and useful implication: trust respondents' self-reports.

Description

Type of resource text
Form electronic; electronic resource; remote
Extent 1 online resource.
Publication date 2017
Issuance monographic
Language English

Creators/Contributors

Associated with Vannette, David L
Associated with Stanford University, Department of Communication.
Primary advisor Krosnick, Jon A
Thesis advisor Krosnick, Jon A
Thesis advisor Grimmer, Justin
Thesis advisor Hamilton, James, 1961-
Thesis advisor Iyengar, Shanto

Subjects

Genre Theses

Bibliographic information

Statement of responsibility David L. Vannette.
Note Submitted to the Department of Communication.
Thesis Thesis (Ph.D.)--Stanford University, 2017.
Location electronic resource

Access conditions

Copyright
© 2017 by David Lee Vannette
License
This work is licensed under a Creative Commons Attribution Non Commercial 3.0 Unported license (CC BY-NC).
