Surveys come in all shapes and sizes. And like any data collection, they require adaptability throughout the process. Two statewide surveys of the same number of people illustrate how surveys can look different, and provide different insights, depending on your core principles.
We often turn to surveys when existing data doesn’t meet the needs of an evaluation. We know, however, that with surveys inundating us from nonprofits, consumer businesses, and government agencies alike, response rates are declining across the board. On top of that, some groups are more likely than others to respond to surveys.
We at The Improve Group often talk about cognitive biases, the little tricks our brains play on us as we try to make sense of the world. One trick we pay particular attention to is the “planning fallacy”: the tendency to underestimate the time and effort a task will take. If we accept that our plans will go off course at some point, it becomes critical to have a clear set of core principles guiding your data collection.
How does this come into play with surveys? Your population of interest, the context its members live in, and many other factors can require adaptability during data collection; your core principles are the clear values that stay the same throughout and guide those course corrections. Are you looking at your work with an equity lens? Are you most focused on statistical reliability? Whose voices are the most important to hear? Answers to these questions can help you articulate your survey’s non-negotiables.
In one example, a survey for a state agency, we were aiming for statistical representation within a modest budget. To reach people statewide, we took a traditional approach: we drafted the questions and gathered feedback, bought a list of names, sent a series of mailings and reminders, and followed up by telephone as needed. While our results were statistically representative, our budget constraints meant we could not do supplemental outreach to community members who were unlikely to respond to these traditional methods. This shows that being statistically representative is not always the same as being community-responsive.
In comparison, our Olmstead Quality of Life Survey of Minnesotans with disabilities had a budget ten times the size of the first example’s. An important part of this survey was being person-centered: for example, conducting each survey in the way that made the most sense to the individual. As you can imagine, this added considerable time and travel costs, but we maintained our core principle of person-centeredness and better represented the variety of experiences within this population.
This will be the topic of an Improve Group presentation next month at the Community Indicators Consortium—come by if you’re attending the conference!