Berkeleyan
New study highlights survey vulnerabilities
Respondents queried about their dicey behavior may be more likely to level with a computer than with a human
16 February 2005
Some 8.3 percent of California teens smoke cigarettes. No, wait, make that 4.5 percent. Or is it really 14.2 percent?
It can be hard for public-health researchers, let alone the public, to know which numbers to believe when surveys come up with such varying results. Epidemiologists face this challenge regularly, as do politicians and any other group that relies upon polls, questionnaires, and surveys about people’s behavior or beliefs.
Joel Moskowitz, director of Berkeley’s Center for Family and Community Health, highlights this problem in a new study assessing two different survey methods designed to estimate the prevalence of teen smoking in California. The paper, published in the Winter 2004 issue of Public Opinion Quarterly, tested a relatively new telephone survey method and compared it with an existing one. Moskowitz found the two methods yielded significantly different results.
Both methods involved interviews with adolescents 12 to 17 years of age in California in a survey conducted by the Gallup Organization in 2000. Telephone interviewers contacted a random sample of households and asked permission to interview any adolescents at home. Half of the adolescents were randomly assigned to complete the survey with a computerized self-interview, and the remaining half completed the survey using the standard interviewer-administered method. More than 2,400 interviews were completed; the overall survey response rate was 49 percent.
In the telephone computer-assisted self-interviewing (T-ACASI) method, participants listened to pre-recorded, computer-controlled questions and responded by pressing keys on a touch-tone telephone. In the computer-assisted telephone interviewing (CATI) method, interviewers asked the questions aloud and entered responses into a computer. The questions were the same in both surveys.
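To make the contrast between the two modes concrete, here is a toy sketch, in Python, of a self-administered keypad interview. The question wording, response codes, and flow are invented for illustration and are not taken from the Gallup instrument.

```python
# Toy model of a T-ACASI exchange (illustrative only: the study's actual system
# played pre-recorded audio over the phone and captured touch-tone keypresses).
QUESTIONS = [
    ("Have you smoked a cigarette in the past 30 days? Press 1 for yes, 2 for no.",
     {"1": "yes", "2": "no"}),
]

def run_self_interview(get_keypress) -> dict[str, str]:
    """Ask each scripted question and record the respondent's keypad answer."""
    answers = {}
    for prompt, choices in QUESTIONS:
        key = get_keypress(prompt)     # stands in for audio playback + keypress capture
        while key not in choices:      # invalid key: repeat the question
            key = get_keypress(prompt)
        answers[prompt] = choices[key]
    return answers

# In the CATI arm, a human interviewer reads the same questions aloud and types
# the spoken answers into the computer instead of the respondent pressing keys.
print(run_self_interview(input))  # `input` lets a keyboard stand in for the keypad
```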
The automated T-ACASI survey produced an estimate that 8.3 percent of teens had smoked in the prior 30 days. The CATI survey yielded a significantly lower estimate of current teen smoking: 4.5 percent. Both figures are lower than the 14.2 percent prevalence found in a school-based survey of California teens conducted in 2000.
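To see why a gap of that size registers as statistically significant, one can apply a standard two-proportion z-test. The sketch below assumes the roughly 2,400 completed interviews were split evenly between the two arms; the per-arm counts are back-calculated from the reported percentages for illustration and are not figures from the paper.

```python
from math import erf, sqrt

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for a difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                         # common rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))   # standard error of the difference
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Assumed split: ~1,200 interviews per arm, so 100/1200 ≈ 8.3% (T-ACASI)
# and 54/1200 = 4.5% (CATI). The counts are illustrative, not from the paper.
z, p = two_proportion_ztest(x1=100, n1=1200, x2=54, n2=1200)
print(f"z = {z:.2f}, p = {p:.5f}")  # z ≈ 3.8: a gap this large is very unlikely by chance
```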
School-based surveys are the most common tools used to estimate behavior among adolescents and children, says Moskowitz. But such surveys miss many high-risk youth who are often not in class and may be more likely to smoke or do drugs.
Still, students who take a survey at school, in a setting with peers rather than parents, may feel more open about reporting high-risk behavior than they would at home, the report said.
Mom’s listening? OK, I never touch the stuff . . .
Notably, 59 percent of respondents in the CATI survey reported that a parent could hear all or part of the interview, compared with 42 percent of respondents in the T-ACASI survey. The perception of a lack of privacy could have led to lower estimates of smoking in both telephone surveys, says Moskowitz.
Yet phone interviews have increased in popularity over the years, as nearly all households in the United States now have phones, according to the report. Calling homes may reach some of the high-risk youth that school-based surveys do not. Phone surveys also tend to be more cost-efficient.
At the same time, response rates for phone surveys are on the decline with the advent of answering machines and telemarketing, as people screen calls to their home phones, notes Moskowitz. In addition, people with cell phones are becoming less dependent on their land lines. When people do respond, their answers vary depending upon the survey method used.
“In the phone survey where respondents spoke with a live interviewer, they may have underreported their smoking behavior because of a perceived lack of confidentiality, even though the two survey groups were equally anonymous,” says Moskowitz. “The youths may have felt more comfortable revealing their smoking habits to a computer rather than a person.”
People also tend to underreport beliefs or behaviors, such as smoking, that go against what is considered “socially desirable,” especially when interacting with an interviewer. “That’s also why people tend to overreport desirable behavior, such as voting frequency,” he says.
A pre-recorded, computer-controlled system takes the human interviewer out of the equation, which could help when surveys target high-risk behavior such as substance abuse, he said.
Pollsters are aware of these drawbacks, and statisticians have come up with methods to compensate for the potential biases. For instance, researchers weight the responses from groups that have a low response rate in an attempt to reflect their true representation in a population. However, that procedure has its downsides.
“Weighting the data can sometimes do more harm than good,” says Moskowitz. “You’re assuming that the people who respond to the survey are representative of the non-respondents, and that may not always be the case.”
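To make that caveat concrete, here is a minimal post-stratification sketch with invented numbers; the age groups, response counts, smoking counts, and population shares are all hypothetical, and none of them come from the study. Each respondent group is scaled so that its weighted share of the sample matches its share of the population.

```python
# Hypothetical figures for illustration only (not from the study).
population_share = {"12-14": 0.50, "15-17": 0.50}  # assumed census shares
respondents = {"12-14": 700, "15-17": 500}         # completed interviews per group
smokers = {"12-14": 20, "15-17": 70}               # "yes" answers per group

total = sum(respondents.values())

# Weight = (population share) / (sample share): under-sampled groups get w > 1.
weights = {g: population_share[g] / (respondents[g] / total) for g in respondents}

unweighted = sum(smokers.values()) / total
weighted = sum(weights[g] * smokers[g] for g in smokers) / total
print(f"unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")  # 7.5% vs ~8.4%
```

Under these made-up numbers the weighted estimate rises because the under-sampled older group, which smokes more, is scaled up. The adjustment is only as good as the assumption Moskowitz flags: that the respondents in each group resemble the non-respondents they stand in for.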
Despite these concerns, Moskowitz sees surveys and polls as necessary. He urges people, however, to take a closer look at how a survey was conducted before accepting its results. “These findings underscore the need for people to be wary of survey results, because so much depends upon the methodology used,” says Moskowitz. “It’s important not to take survey results at face value.”