This is the third part of a series on research issues affecting Hawai‘i users. It addresses what are called “multi-mode surveys.” Multi-mode surveys collect data using two or more different methods, and the results are combined into a single dataset for analysis.
Long before cell-phone-only households began to siphon off survey respondents, researchers noticed a steady decline in telephone response rates across the nation. The phenomenon can be traced back to the 1970s, but the rate of decline took a turn for the worse around 2000.
The last time the research industry changed methods was in the early 1950s, when we began to move from house-to-house surveys toward telephone surveys. The old method had grown very expensive, just short of unfeasible. The new method had problems, but eventually became the bread-and-butter of the industry. Telephone surveys are now threatened by a series of major issues, and some observers are looking for another change in method.
That can happen in one of two ways. First, a new miracle method might appear that can reach 95-plus percent of the consumer public, achieve 80-plus percent response rates, accommodate any kind of question (written word, audio, video, etc.), and be available at 10 to 20 percent of the cost of a good telephone survey. A lot of people felt that Internet surveying would be that miracle. It came up short on several of those criteria, and most experts today agree that a single-method solution to rapidly declining telephone response rates is unlikely.
Second, phone surveys may be replaced by a combination of data collection methods. This outcome is more likely, but it has not been without its problems. The first and still best examples of multi-mode surveys as a solution to coverage and response problems are the address-based designs used by the National Committee for Quality Assurance (NCQA) and the Census Bureau’s American Community Survey (ACS).
The basic NCQA model is mail-mail-phone. In practice, that translates into an introductory letter, a mailed survey, a reminder card, a second mailing to non-respondents, a second reminder card, and a telephone survey conducted among the remaining sample members. The method was developed by academic researchers after years of dedicated study. NCQA surveys patients, doctors, and insured persons, so it has good lists of names, addresses, and phone numbers. The rest of us are often not so lucky.
The ACS model is more intensive. It starts with a letter, then follows with a mail survey, another mail survey, a telephone survey, and finally an in-household interview with those who have not responded to the other methods. ACS boasts overall response rates in the mid-nineties. Of course, it also has the power of law on its side: you can be fined for failing to respond.
There have also been attempts to add e-surveys to the multi-mode mix. NCQA offered one for a while and then dropped it. One tactic that has worked is to build a web front end onto a multi-mode survey, giving respondents the option to complete the survey by mail or on the web.
It would be possible to build cell phones into the method. There are no proven protocols for this as yet, and we personally doubt its effectiveness: the additional completed surveys add little value, and the cost is quite high.
The biggest problem with multi-mode surveys is mode effect. From the early years of Internet surveying, researchers noticed that web surveys and phone surveys produced different results. It wasn’t just sampling problems or coverage issues. We got a chance to do our own research on the question and found that even the same individual gave different answers to the same question asked by two different methods. That is a mode effect: results are directly related to the survey mode itself. Most of the research, including our own, suggests that one major reason for mode effects is the difference between the way human beings process information received by eye and by ear.
It seems that when people hear a question like the one in Figure 3, they tend to hear two numbers – the endpoints – and those two numbers are emphasized. Some respondents use the endpoints almost exclusively. When people see the scale, on the other hand, they tend to start from the middle and work outward, so midpoints are used much more often. This has nothing to do with the subject matter of the question, only with the way the item is presented, perceived, and processed.
Multi-mode surveys are a superior method of conducting research. They solve a lot of the problems that plague other methods in this day and age, but they carry their own issues. Recently, problems associated with mode effects have been identified and addressed. Our own research, for instance, found that mode effects are limited to certain types of questions, mostly scales and items subject to social-desirability bias. These question types can be avoided or redesigned to minimize mode effects, and the approach known as total survey design can be applied to the survey instrument and the analysis to test for and adjust for most mode effects.
The larger issue is cost. It’s appropriate to think of multi-mode surveys as the “Cadillac” of data collection methods. For ACS-type designs, costs can run as high as five times the cost of a phone survey. Even the simplest multi-mode survey will cost up to 25 percent more.
We expect that multi-mode surveys will continue to be applied to the largest and most demanding survey work in Hawai‘i. Most of our larger accounts, especially for ongoing government work, are moving in that direction. The potential for multi-mode surveys on smaller, ad hoc projects is limited. The cost is simply too high.
We have budgeted some applications that showed only a very slight increase in cost, so multi-mode surveys are always a possibility. You can always ask us about them, and we will recommend them whenever we think the cost is justified.
Taken together, recent changes in the way researchers do our work may have very significant effects on your work. It’s fair to ask what we are doing about the big problems in our field, and we will be pleased to provide you with options and alternatives for getting the highest quality information possible.
We expect some of you will have questions. You can contact Jim Dannemiller at 440-0701, or ask any of our staff. We are ready to discuss how the issue affects your information needs, and suggest ways to solve the problem.
SMS did a great job in leading Hawai‘i Tourism Authority’s evaluation development process…including identifying input, output and outcome measures to track progress toward HTA’s strategic goals. SMS also provided steps to guide us in integrating the evaluation process into our overall operations and planning. Our staff was actively engaged in the process and tracking the measures will ensure a focus on driving towards outcomes.
— Leslie Dance, former VP Marketing
Hawai‘i Tourism Authority