A Blog by Jonathan Low

 

May 23, 2012

Is Anybody Out There? Only 1 in 10 Respond to Survey Calls

We rely on public opinion polls and surveys to help us figure out where society is going - and whether we are in step with it.

Individuals, businesses and politicians all use them reflexively. The public pays attention. Polling results routinely generate headlines.

But there's a problem.

Many of these surveys rely on phone calls placed to people at home. The issue is that increasing numbers of people live in homes without landline phones. They are not off the grid; they have simply given up their traditional phones and gone mobile-only. And not only do fewer than 10% of mobile owners answer calls from people they don't know, but that figure comes after many survey firms have already dropped mobile-only numbers from their call lists.

We should care because the information gleaned from these surveys is then fed into databases and compiled as knowledge bordering on wisdom. So the results are used to make decisions about services and policies that affect the way people live. But if the so-called info comes from a small sub-sample of an already shrinking sample, it may be skewed or biased, which means the knowledge purportedly imparted is inaccurate. This wastes corporate assets and misallocates resources. It may also seriously mislead public policy makers. Since businesses and governments are financially constrained, making the wrong decision based on bad data is almost worse than trusting your instincts - or making no decision at all. JL

Will Oremus reports in Slate:
When you get a call on your cellphone from an unfamiliar number, do you answer it? If the person on the other end of the line immediately tries to assure you they’re not trying to sell you anything, do you believe them? If they tell you they’re conducting a public opinion survey that will only take a few minutes of your time, do you go ahead and start sharing your views on religion, gay marriage, the economy, and the upcoming election?

If you answered “yes” to all those questions, congratulations! You’re among the 9 percent of Americans whose opinions stand in for those of the nation as a whole in public opinion surveys.

The nonprofit Pew Research Center is one of the least biased, most reliable polling organizations in the country. When they tell you that only one-half of Americans believe Mormons are Christians, that one in five U.S. adults still doesn’t use the Internet, and that a majority of Americans now support gay-marriage rights, you’d like to think you can trust them.

Recently, though, Pew decided to turn the spotlight on the reliability of its own research. What it found was alarming: Fewer than one in 10 Americans contacted for a typical Pew survey bothers to respond. That’s down from a response rate of 36 percent just 15 years ago.

Why should we care? Two of the top stories in the presidential campaign at the moment are Mitt Romney’s “woman problem” and Barack Obama’s move to support gay marriage. But do women really dislike Romney? And was Obama boldly taking an unpopular stand, or capitalizing on a shift in public attitudes? A new CBS/New York Times poll has raised eyebrows (and the Obama Administration’s ire) by suggesting that Romney actually leads among women—and that most people still oppose gay marriage.

If such polls aren’t reaching a representative subset of the populace, it’s hard to know what to believe. (Disclosure: Slate has teamed up with SurveyMonkey for a series of monthly political surveys. These surveys are different from Pew’s polls in that they’re intended to provide a snapshot of the electorate, not a scientific reading of the nation’s preferences.) This isn’t just a Pew problem. Response rates to telephone surveys—which since the late 1980s have been the standard for polls attempting to reach representative samples of Americans—have been sliding ever since people started ditching their landlines. For many possible reasons—including mobile phones’ prominent caller ID displays and the “vibrate” option—far fewer people these days pick up when a stranger calls at 8 p.m. on a weeknight. Those who do answer their cellphones are often teens too young to be eligible for the polls. And when they do pick up, they’re less likely to hand the phone off to an adult in the household.

Survey outfits’ initial response to the cord-cutting trend in the early 2000s was to ignore it. But the response rates of even those who still have landlines have also dropped off of late. And besides, it soon became clear that calling only landlines created serious problems with their data. Landline surveys, for example, reach more Republicans than Democrats. Given that polls are often judged on their resemblance to actual election results, such findings gave organizations plenty of incentive to bring cellphones into the mix, despite the added hassle and expense. The best pollsters now carefully weight their calls between landline and mobile phones to match their prevalence in the population as a whole. (Though there’s no public cellphone directory, wireless providers make their lists of active numbers available to polling organizations.)
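To make that weighting step concrete, here is a minimal sketch of one way a pollster might rebalance completed interviews by phone status. The population shares and interview counts below are invented for illustration; this is not Pew's actual procedure.

```python
# Illustrative dual-frame weighting by phone status. The population shares and
# interview counts are made-up numbers, not figures from Pew or the article.

population_share = {"landline_only": 0.10, "dual_user": 0.55, "mobile_only": 0.35}
completed_interviews = {"landline_only": 300, "dual_user": 500, "mobile_only": 200}

total = sum(completed_interviews.values())

# Weight each group so the weighted sample matches its share of the population.
weights = {
    group: population_share[group] / (completed_interviews[group] / total)
    for group in completed_interviews
}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
# Mobile-only respondents get a weight above 1 (here 1.75) because they are
# under-represented in the raw sample relative to the population.
```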

But that hasn’t fully solved the survey nonresponsiveness problem. To understand why, consider the difference between these two statements:

1) One in five U.S. adults doesn’t use the Internet.

2) Of the 9 percent of U.S. adults who respond to telephone opinion surveys, one in five doesn’t use the Internet.

The second sounds less definitive, right? But how much less definitive? We don’t know. And that’s the root of the problem.
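One way to see how much less definitive the second statement is: treat the population figure as a weighted average of respondents and non-respondents. In the sketch below, the 9 percent response rate and the one-in-five figure come from the article; the offline rates assumed for non-respondents are purely hypothetical.

```python
# Back-of-the-envelope illustration of how far the headline figure could drift.
# The 9% response rate and the 20% "offline" share among respondents come from
# the article; the rates assumed for non-respondents are purely hypothetical.

response_rate = 0.09
offline_among_respondents = 0.20  # "one in five"

for offline_among_nonrespondents in (0.10, 0.20, 0.35):
    # The population figure is a weighted average of the two groups.
    true_rate = (response_rate * offline_among_respondents
                 + (1 - response_rate) * offline_among_nonrespondents)
    print(f"non-respondents offline at {offline_among_nonrespondents:.0%} "
          f"-> population estimate {true_rate:.1%}")
# With only 9% responding, the non-respondents dominate the average, so the
# survey pins down very little unless the two groups happen to be alike.
```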

A lower response rate, on its own, doesn’t necessarily imply flawed results. In a widely cited 2006 paper, University of Michigan professor Robert Groves—now director of the U.S. Census Bureau—explained how efforts to increase response rates can actually lead to less reliable data. Groves cited a 1998 study in which exit pollsters offered some voters a free pen if they participated. That increased the response rate, but for some reason, Democrats were more enticed by the pens than Republicans, skewing the results.
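A toy calculation, with invented numbers, shows the mechanism Groves describes: an incentive that lifts the response rate unevenly across groups can push the estimate further from the truth even as participation improves.

```python
# Toy model of the pen-incentive effect Groves cites: raising response rates
# unevenly across groups can make a poll less accurate. All numbers invented.

true_dem_share = 0.50                 # assume a 50/50 electorate
base_response = 0.30                  # baseline response rate for everyone
boost_dem, boost_rep = 0.15, 0.05     # Democrats are more enticed by the pen

def observed_dem_share(resp_dem: float, resp_rep: float) -> float:
    """Democratic share among those who actually respond."""
    dem = true_dem_share * resp_dem
    rep = (1 - true_dem_share) * resp_rep
    return dem / (dem + rep)

print(f"no pen:   {observed_dem_share(base_response, base_response):.1%}")
print(f"with pen: {observed_dem_share(base_response + boost_dem, base_response + boost_rep):.1%}")
# The overall response rate rises from 30% to 40%, but the poll now overstates
# the Democratic share (about 56% instead of the true 50%).
```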

How can we tell if Pew’s low response rate affects its findings in a given survey? We need to know whether the 91 percent who decline to respond differ in their attitudes from the 9 percent who participate. Pew’s new meta-survey attempted to find out. It did this by comparing the results of one of its own telephone surveys to the findings in benchmark U.S. government surveys, which have response rates of 75 percent.

Some of the discoveries were comforting. It seems that Republicans and Democrats are equally disinclined to take Pew’s calls, diminishing the risk of misgauging a presidential race. And Pew’s respondents are registered to vote at about the same rate as the wider population.

But there are some striking differences. Of the people who respond to Pew surveys, 55 percent said they had volunteered for an organization in the past year—more than twice the percentage among respondents to the government’s surveys. Fifty-eight percent said they had talked with their neighbors in the past week, compared to 41 percent of those reached by the government. And Pew respondents were more than three times as likely to have contacted a public official in the past year.

Those results make some intuitive sense. Survey respondents tend to be people who are more inclined to share their time with people they don’t know well. Still, Pew was wise to ask these questions: It now knows to look out for these biases in future survey results. (The government, for its part, must be careful to account for the fact that its respondents are disproportionately U.S. citizens, presumably because noncitizens aren’t eager to talk to government officials about their immigration status.)

There were also some oddities in Pew’s results, however. Its respondents are somewhat more likely to be smokers, more likely to be on food stamps, and—perhaps surprisingly—more likely to be Internet users. Groves, the census director, tells me that telephone surveys also tend to reach fewer urbanites and fewer people who live alone. The differences aren’t huge, but they’re enough to make anyone wonder: Just what other attributes might differentiate survey respondents from the average American? On most issues that Pew studies, we may never know, because there aren’t government benchmarks for, say, the number of people who would consider voting for a Muslim congressional candidate.

“Nonresponse bias,” as this is called, is just one of the many types of bias that can skew poll results. There’s the famous (and controversial) “Bradley Effect,” in which white respondents overstate their willingness to vote for a black candidate when they’re on the phone with another person, because they don’t want to seem racist. Polls conducted by snail mail may overrepresent the elderly. (That’s called “coverage bias.”) And there are all the ways in which the framing of a question can affect people’s answers. A classic example: People are much less likely to favor cuts to U.S. foreign aid spending if told ahead of time that it makes up just 1 percent of the federal budget. (The average voter believes it’s more like 25 percent.)

To be clear, all of these are distinct from your run-of-the-mill sampling error—the small degree of inaccuracy inherent in any poll administered to only a portion of a population. Sampling error, in contrast to nonresponse bias, is easily estimated (and typically reported along with results).
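For contrast, here is the textbook margin-of-error calculation that covers sampling error alone; the sample size of 1,000 is an assumed figure, not one reported in the article.

```python
import math

# Textbook 95% margin of error for a proportion from a simple random sample.
# The sample size of 1,000 is an assumption, not a figure from the article.

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the 95% confidence interval for an estimated proportion."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"+/- {margin_of_error(p=0.50, n=1000):.1%}")  # about +/- 3 points
# Nonresponse bias has no such formula: it depends on how the silent 91%
# differ from the 9% who answer, which the survey data alone cannot reveal.
```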

For what it’s worth, surveys today are probably more reliable on the whole than they were 30 years ago. Back then, most reputable polls were conducted door-to-door, because landline phones weren’t yet prevalent enough to provide a representative sample. Perhaps, then, the 1990s and early 2000s—when the best public-opinion surveys served as incredibly reliable proxies for the views of the populace—were a short-lived golden age in American public-opinion research. The ability to reach almost every U.S. household might have been a historical anomaly, a fortuitous side effect of a particular set of technological circumstances.

Can polling ever get back to that level of reliability? Perhaps. Michael Dimock, associate research director for the Pew Research Center for the People and the Press, admits that technological flux is always a problem for pollsters, but believes they find ways to adjust eventually. Meta-studies like Pew’s are a good first step. And Robert Groves, the census director, tells me improvements in Internet-based data mining are already helping to fill in some of the gaps in traditional sample surveys. A phone survey focusing on the number of people who want to sell or buy houses, for instance, can now be complemented by data from Realtor.com. “The big story is that how we measure humans is undergoing fantastic changes,” Groves says. “We’re in the middle of the revolution.” Will that revolution be the death of pollsters? Survey says: too close to call.
