Motivation for Adaptive CBC


Background

 

Choice-Based Conjoint (CBC) has been the most widely used conjoint technique among our user base since about the year 2000.  The marketing research community has adopted CBC enthusiastically, for several reasons.  Choice tasks seem to mimic what actual buyers do more closely than the ranking or rating of product concepts used in the original conjoint analysis developed in the 1970s.  Choice tasks seem easy for respondents, and everyone can make choices.  Equally important, multinomial logit analysis provides a well-developed statistical model for estimating respondent partworths from choice data.

 

However, choice tasks are less informative than tasks involving ranking or rating of product concepts.  For this reason, CBC studies have typically required larger sample sizes than ratings-based conjoint methods.  In CBC studies, the respondent must examine the characteristics of several product concepts in a choice set, each described on several attributes, before making a choice.  Yet, that choice reveals only which product was preferred, and nothing about strength of preference, or the relative ordering of the non-preferred concepts.  Initially, CBC questionnaires of reasonable length offered too little information to support multinomial logit analysis at the individual level, unless the number of attributes and levels was kept at a minimum.  More recently, hierarchical Bayes methods have been developed which do permit individual-level analysis, but interest has remained in ways to design choice tasks so as to provide more information.
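The multinomial logit model mentioned above can be sketched briefly: given a total utility for each concept in a choice set, the probability of choosing each concept is proportional to the exponential of its utility.  A minimal illustration (the utility values here are hypothetical, chosen only to show the calculation):

```python
import math

def mnl_choice_probabilities(utilities):
    """Multinomial logit: P(i) = exp(U_i) / sum_j exp(U_j)."""
    exp_u = [math.exp(u) for u in utilities]
    total = sum(exp_u)
    return [e / total for e in exp_u]

# Hypothetical total utilities for three concepts plus a "None" option
# (utility 0 by convention for the "None" constant in this sketch).
utilities = [1.2, 0.4, -0.3, 0.0]
probs = mnl_choice_probabilities(utilities)
print([round(p, 3) for p in probs])
```

Note how a single observed choice only tells us which concept had the highest utility on that occasion; the model recovers the underlying utilities only by pooling many such observations, which is why choice data are less informative per task than ratings.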

 


Problems with Traditional CBC

 

In recent years marketing researchers have become aware of potential problems with CBC questionnaires and the way respondents answer CBC questions.

 

The concepts presented to respondents are often not very close to the respondent's ideal.  This can create the perception that the interview is not very focused or relevant to the respondent.

 

Respondents (especially in Internet panels) do choice tasks very quickly.  According to Sawtooth Software's experience with many CBC datasets, once respondents warm up to the CBC tasks, they typically spend about 12 to 15 seconds per choice task.  It's hard to imagine how they could evaluate so much information on the screen in such short order.  It seems overwhelmingly likely that respondents accomplish this by simplifying their procedures for making choices, possibly in a way that is not typical of how they would behave if buying a real product.

 

To estimate partworths at the individual level, it is necessary for each individual to answer several choice tasks.  But when a dozen or more similar choice tasks are presented, respondents often find the experience repetitive and boring, and they may become less engaged in the process than the researcher would wish.

 

If the respondent is keenly intent on a particular level of a critical attribute (a "must have" feature), there is often only one such product available per choice task.  Such a respondent is left to choose between that product and "None," and respondents tend to avoid the "None" constant, perhaps due to "helping behavior."  Thus, for respondents intent on just a few key levels, standard minimal-overlap choice tasks don't encourage them to reveal their preferences much more deeply than the few "must have" features.

 

Gilbride et al. (2004) and Hauser et al. (2006) used sophisticated algorithms to examine patterns of respondent answers, attempting to discover simple rules that can account for respondent choices.  Both groups of authors found that respondent choices could be fit by non-compensatory models in which only a few attribute levels are taken into account.  We find that when choice sets are composed so as to have minimal overlap, most respondents make choices consistent with the hypothesis that they pay attention to only a few attribute levels, even when many more are included in product concepts.  In a recent study with 9 attributes, 85 percent of respondents' choices could be explained entirely by assuming each respondent paid attention to the presence or absence of at most four attribute levels.

 

Most CBC respondents answer more quickly than would seem possible if they were giving thoughtful responses with a compensatory model (a model that assumes respondents add the value of each feature and choose the product concept with the highest overall utility).  Most of their answers can be accounted for by very simple screening rules involving few attribute levels.  Combine those facts with the realization, shared by anyone who has answered a CBC questionnaire, that the experience seems repetitive and boring, and one concludes that a different way of asking choice questions is needed to obtain better data.
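The contrast between compensatory and non-compensatory processing described above can be made concrete with a small sketch.  The attributes, levels, and partworths below are hypothetical, invented purely for illustration; the screening rule shown is one simple form of "must have" cutoff behavior:

```python
# Hypothetical partworths for illustration only.
PARTWORTHS = {
    ("brand", "A"): 0.8, ("brand", "B"): 0.1,
    ("price", "$99"): 0.6, ("price", "$149"): -0.6,
    ("warranty", "2 yr"): 0.3, ("warranty", "1 yr"): -0.3,
}

def compensatory_choice(concepts):
    """Additive rule: sum partworths, pick the highest-utility concept."""
    utilities = [sum(PARTWORTHS[(a, lvl)] for a, lvl in c.items())
                 for c in concepts]
    return utilities.index(max(utilities))

def screening_choice(concepts, must_have, fallback=compensatory_choice):
    """Non-compensatory rule: discard concepts lacking the "must have"
    levels; break ties among survivors with the compensatory rule."""
    survivors = [i for i, c in enumerate(concepts)
                 if all(c.get(a) == lvl for a, lvl in must_have.items())]
    if not survivors:
        return None  # the respondent falls back on "None"
    kept = [concepts[i] for i in survivors]
    return survivors[fallback(kept)]

concepts = [
    {"brand": "A", "price": "$149", "warranty": "1 yr"},
    {"brand": "B", "price": "$99",  "warranty": "2 yr"},
]
print(compensatory_choice(concepts))               # additive rule
print(screening_choice(concepts, {"brand": "A"}))  # "must have" brand A
```

With these hypothetical partworths, the additive rule prefers the second concept (its price and warranty advantages outweigh the brand), while a respondent screening on brand A ignores those trade-offs entirely and picks the first — the kind of divergence the text describes.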

 

A great deal of effort has been dedicated to designing efficient CBC experiments featuring orthogonality and high D-efficiency.  These efforts have assumed that respondents answer using an additive process consistent with the logit rule.  We have become increasingly convinced that most respondents to complex conjoint studies employ non-compensatory heuristics at odds with the logit rule, and that efforts to improve design efficiency assuming compensatory processing may be misdirected.  ACBC's design strategy is effective for respondents who employ various degrees of non-compensatory and compensatory processing.  By the traditional design criteria, ACBC's designs would be judged inferior.  But that is an inappropriate standard given that so many respondents apply cutoff rules, such as screening based on unacceptable and must-have levels.
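For readers unfamiliar with the D-efficiency criterion mentioned above, one common linear-model formulation (a textbook version, not necessarily the formula any particular software uses internally) scores a design matrix X as 100 × |X′X|^(1/p) / N, which reaches 100% for an orthogonal ±1-coded design.  A minimal sketch under those assumptions:

```python
def xtx(X):
    """Compute X'X for a design matrix X given as a list of rows."""
    p = len(X[0])
    return [[sum(row[i] * row[j] for row in X) for j in range(p)]
            for i in range(p)]

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[pivot][i]) < 1e-12:
            return 0.0
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def d_efficiency(X):
    """100 * |X'X|^(1/p) / N; 100% for an orthogonal +/-1 design."""
    N, p = len(X), len(X[0])
    return 100.0 * det(xtx(X)) ** (1.0 / p) / N

# Full-factorial 2x2 design, effects-coded (+1/-1), intercept column first.
X = [[1,  1,  1],
     [1,  1, -1],
     [1, -1,  1],
     [1, -1, -1]]
print(d_efficiency(X))
```

The point of the surrounding paragraph is that this criterion presupposes an additive logit-style response process; a design that maximizes it is not necessarily best for respondents who screen on cutoff rules.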

 


The Quest for Better Data

 

We believe CBC is an effective method that has been of genuine value to marketing researchers, but that it can be improved.  And we believe the greater need at this point is not for better models, but rather for better data.  Adaptive CBC is a promising method that responds well to the problems above.  It uses the trusted full-profile method of presenting product concepts.  Its surveys are more engaging for respondents.  Rigorous studies comparing it with traditional CBC suggest that our adaptive form leads to data that are more accurate and predictive of choice behavior.  It captures more information at the individual level than traditional CBC surveys and may be used even with small (for example, business-to-business) samples.

 

Adaptive CBC is best applied to conjoint-type problems involving about five attributes or more.  Studies involving few attributes (such as the brand + package + price studies commonly done in fast-moving consumer goods (FMCG) research) wouldn't seem to benefit from ACBC.  The store-shelf approach in our standard CBC software would seem more appropriate for such studies.

 

Page link: http://www.sawtoothsoftware.com/help/lighthouse-studio/manual/index.html?motivationforadaptivecbc.html