Survey Quality & MaxDiff: An Assessment of Who Fails & Why

This paper was first presented at the 2009 Sawtooth Software Conference in Delray Beach, Florida, by Andrew Elder, Vice President of Marketing Sciences for Illuminas in Austin, TX, and Terry Pan, Marketing Sciences Manager at Illuminas Austin. The paper was titled "Survey Quality and MaxDiff: An Assessment of Who Fails, and Why."

The authors use MaxDiff (a derivative of conjoint analysis) as a tool to assess respondent quality, since it implicitly measures a respondent's consistency across multiple comparisons. Because this consistency measure can be standardized across topics and audiences, a meta-analysis of eight MaxDiff studies helps paint a more specific picture of the low-quality respondent. Based on this analysis, "speeding" through a survey and "straight-lining" ratings questions show the strongest negative relationship with MaxDiff performance.

When poor performers on each task are overlaid, a hierarchy of quality emerges: respondents who fail multiple question types objectively provide the lowest data quality. However, individuals who fail only a single task (e.g., those who "speed" without "straight-lining" or showing poor MaxDiff consistency) perform relatively well on the remaining survey components. These insights serve as a cautionary tale against the overzealous rejection of questionable respondents, since rejecting too aggressively risks biasing results toward those who answer questions in a particular way. By using a variety of question types, researchers are less likely to exclude individuals who favor one response style over another. Overall, the authors found that only 1-4% of individuals justify rejection, varying by study.
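As an illustration of how such an overlay might be computed in practice, the sketch below flags speeders, straight-liners, and low-consistency MaxDiff respondents, then counts how many checks each person fails. The column names, toy data, and thresholds are assumptions for illustration only, not the authors' actual variables or cutoffs.

```python
import pandas as pd

# Toy respondent-level data; column names, values, and thresholds are
# illustrative assumptions, not the paper's actual variables or cutoffs.
df = pd.DataFrame({
    "duration_sec": [310, 120, 420, 60, 380, 55, 400, 290],
    # A five-item ratings grid (e.g., 1-7 scales shown on one screen)
    "r1": [5, 6, 6, 4, 3, 7, 5, 5],
    "r2": [4, 3, 5, 4, 6, 2, 5, 4],
    "r3": [6, 5, 6, 4, 5, 7, 5, 6],
    "r4": [3, 4, 7, 4, 4, 1, 5, 3],
    "r5": [5, 6, 5, 4, 6, 3, 5, 5],
    # MaxDiff fit statistic (e.g., root likelihood); higher = more consistent
    "maxdiff_fit": [0.72, 0.61, 0.80, 0.31, 0.68, 0.29, 0.55, 0.70],
})
grid = ["r1", "r2", "r3", "r4", "r5"]

# Speeding: completion time well below the median (0.5x is an assumed cutoff)
df["speeder"] = df["duration_sec"] < 0.5 * df["duration_sec"].median()

# Straight-lining: identical answers across every item in the ratings grid
df["straightliner"] = df[grid].nunique(axis=1) == 1

# Poor MaxDiff consistency: fit below an assumed chance-adjusted threshold
df["low_maxdiff"] = df["maxdiff_fit"] < 0.40

# Overlay the three flags; respondents failing multiple checks are the
# clearest candidates for rejection
flags = ["speeder", "straightliner", "low_maxdiff"]
df["n_flags"] = df[flags].sum(axis=1)
print(df[flags + ["n_flags"]])
```

In this toy data, most flagged respondents fail only one check and just one fails all three, which mirrors the hierarchy described above: overlaying the flags, rather than rejecting on any single one, isolates the small minority whose removal is actually justified.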

You can download the PDF presentation below. For more information, please contact us at [email protected].

Download PDF