Price sensitivity in conjoint

In some studies where we know (from historical data) that price elasticity is low, the conjoint results show a high price elasticity.  What do you believe could be the reason for this, and what would you all do in this case?

My thought is that if we do not have many non-price attributes in a CBC study, the price attribute becomes very prominent in the exercise, and this may be why the result shows a higher price elasticity.  In ACBC studies, if the price range is too wide, you might sometimes see this as well.  Thoughts?
asked Jun 16, 2014 by HongHu Bronze (835 points)
retagged Jun 16, 2014 by Walter Williams

1 Answer

0 votes
We know that the response error in conjoint experiments is typically lower than shown in real-world purchases.  Things are laid out on the survey screen very clearly, with clear descriptions, in well-organized grids of information.  We give the respondent practice and repetition within our conjoint survey environment.  That is not so in the real world, where many other forces are at work to muddy the choices: out-of-stock conditions, influence of others, variety-seeking, satisficing behavior (not taking enough time to make a utility-maximizing decision...many men do this while shopping), advertising and promotions at the point of sale.

More than a decade ago, Greg Rogers & Tim Renkin looked at a lot of P&G sales data vs. CBC results.  They built econometric models based on actual sales data and compared them to CBC results.  On average, across multiple categories, the price sensitivity estimated from CBC was about the same as the price sensitivity they got from their econometric models.  But they found that some categories of fast-moving consumer goods (FMCG) consistently missed, with CBC price sensitivity too high relative to the econometric models, and some were consistently too low.  (My language here assumes the econometric models from sales data are truthful...and maybe that's not the case.)

Anyway, other researchers such as Jordan Louviere report frequently needing to adjust CBC model sensitivity downward (sensitivity on all attributes, not just price) by multiplying all the part-worths by a factor such as 0.4 prior to using the logit rule to predict probabilities of choice (we call this setting the Exponent to 0.4 in our market simulator).  That is not to say that 0.4 is the justified Exponent in all cases...just a generalized suggestion.
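To make the mechanics concrete, here is a minimal sketch of that adjustment: multiply the part-worth utilities by the Exponent (scale factor) before applying the logit rule.  The utility values below are made up for illustration; this is not the simulator's actual code, just the arithmetic it describes.

```python
import numpy as np

# Hypothetical total utilities for three product concepts
# (each the sum of its attribute-level part-worths; values are illustrative).
utilities = np.array([1.2, 0.8, 0.3])

def logit_shares(u, exponent=1.0):
    """Predict choice shares with the logit rule, after scaling
    utilities by the Exponent (scale factor)."""
    scaled = exponent * np.asarray(u, dtype=float)
    e = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return e / e.sum()

print(logit_shares(utilities))       # Exponent = 1.0 (no adjustment)
print(logit_shares(utilities, 0.4))  # Exponent = 0.4: flatter, less sensitive shares
```

Lowering the Exponent below 1.0 shrinks the utility differences between concepts, so predicted shares move toward equality and the simulator's sensitivity to every attribute (price included) is dampened.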

Also, with ACBC, I have found many times that the implied scale factor of the part-worths is even larger than for CBC (meaning the response error is even lower for ACBC).  This means that if the appropriate Exponent adjustment is around 0.4 for CBC, maybe it is 0.2 for ACBC.  I'm speaking in generalities, but I hope the idea is coming across.  Raising the Exponent (scale factor) in the market simulator makes the sensitivity of the predicted probabilities greater for all attributes, not just price.

Rich Johnson would recommend that researchers add "decoy" attributes to CBC studies beyond just brand and price.  That way, respondents would not think the entire study was just about brand and price.
answered Jun 16, 2014 by Bryan Orme Platinum Sawtooth Software, Inc. (198,815 points)

We typically have quite a few non-price attributes, so the last recommendation would not help.  However, we have indeed started to experiment with the Exponent adjustment factor.  A couple of follow-up questions on this: 1. Why do we need to adjust all attributes, not just price? 2. How should we determine an appropriate adjustment factor?  In addition to the rule of thumb (0.4 for CBC), is there any way to tell whether the model adjustment is good or not?