ACBC Default Values for Linear Price Coding in HB-Analysis

Hi everyone,

I've got around 250 respondents to an ACBC survey. I have five non-price attributes and use a summed-pricing approach with a base price (80) and different price adjustments per attribute level (steps of 5, ranging from -15 to +15, with +/- 30 % random variation).

Now, when analysing the data, the software sets default price values for linear coding of the price function. Those values are -52 and 19.5, which is surprising since respondents were shown prices between 2 and 265 (according to the counts analysis). If I set the price values to 2 and 265, the relative importance for price jumps from around 25 % to around 50 %. The same effect occurs for piecewise coding (which I would in fact prefer, because I identified a breakpoint at 50).

For both ways of coding price, the average utilities are quite similar: pct. certainty is close to .5, RLH is around .65, and prediction of holdout tasks is basically the same. Only the None parameter jumps from -100 (linear) to +60 (piecewise). So I am wondering:

1) Where do the default price values come from?
2) Is there an explanation for the big jump in the relative importance of price and in the None parameter?

Thank you very much!

Best regards,
asked Apr 26, 2018 by Thomas (150 points)

1 Answer

+1 vote
Our software makes a guess about the theoretical minimum and maximum prices that could be shown to respondents, and sets those as the two default endpoints in the pricing grid for utility estimation.  But that default guess is sometimes wrong, as your counts analysis shows.  No matter.  Just change the endpoints to encompass the full range that was actually shown to respondents.  And prepare for price to be extremely important (in terms of the importance calculation), because the way you have set up your experiment, price takes into account the full range of prices that could be implied by changes to all the other attributes.
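For a summed-pricing design, the theoretical endpoints can be worked out from the design itself: base price plus the sum of the most extreme per-level price adjustments, scaled by the random price variation.  Here is a rough sketch using the numbers from the question (base 80, five attributes with adjustments from -15 to +15, +/- 30 % variation).  The exact formula our software applies may differ, so treat this as illustrative only:

```python
# Illustrative sketch only -- the exact rule the software uses may differ.
base_price = 80
n_attributes = 5
min_level_adj = -15   # cheapest price adjustment an attribute level can carry
max_level_adj = 15    # most expensive price adjustment per attribute level
variation = 0.30      # +/- 30% random price variation

# Cheapest possible concept, discounted by the maximum downward variation:
theoretical_min = (base_price + n_attributes * min_level_adj) * (1 - variation)
# Most expensive possible concept, inflated by the maximum upward variation:
theoretical_max = (base_price + n_attributes * max_level_adj) * (1 + variation)

print(theoretical_min, theoretical_max)
```

Whatever the default guess turns out to be, the practical advice is the same: widen the endpoints until they cover the full range of prices actually shown to respondents.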

I like piecewise price functions as well.  

If you fail to constrain price (negative), then the None parameter can wander around quite a bit, since with a piecewise price function the implied utilities for the price points are not zero-centered in each iteration.  With constrained price, the convergence of the None parameter, as well as of the implied price utilities at the different cut and end points, tends to be more stable.  I see this all the time in the HB history-of-iterations plot.   But even with constraining piecewise price as negative, the implied utilities of the cut and end points still do not average to zero, so the None parameter needs to adjust with a shift in response.  The predictions are still appropriate, as you've seen.  The implied None choice % should remain appropriate too.

Of course, don't constrain price unless you are absolutely sure respondents would always prefer lower prices over higher prices (all else held equal).  Sometimes with some product categories where price can be a signal of quality, if price becomes too low, utility actually can fall (holding all other features equal).
answered Apr 26, 2018 by Bryan Orme Platinum Sawtooth Software, Inc. (201,265 points)
Hi Bryan,

Thank you very much for clarifying those default values. I'll go with piecewise coding then.

Is there any valuable insight to be gained if I exclude price from the analysis? I am thinking about the relations between the non-price parameters, because with piecewise coding 4 of the 5 non-price parameters are quite close together in terms of relative importance (ranging between 10 and 14 %). So it is rather hard to tell whether those parameters are equally important to respondents, or whether the 14 % parameter is in fact considerably more important than the 10 % one and the effect of price simply masks the difference.

What still puzzles me is that all but 4 of the 20 parameter estimates converge quite quickly. Those 4 don't seem to converge even after 100k iterations. Do you have any idea what the problem might be or how I should handle it?

Thanks again!

Of course, Price should always be estimated as part of the model.  You always want to be able to explain the variance.

Importance scores are strange things.  They just take into account the full range in utility (best minus worst) per attribute, per person.  Even if an attribute is truly unimportant and ignored by respondents, just due to random noise the difference between its best and worst levels must be positive.  For that reason, many top conjoint researchers don't pay much attention to importance scores.  Rather, they conduct market simulations and examine how changes to attribute levels for the base case product (vs. its base case competition) lead to changes in predicted shares of preference.
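To make the arithmetic concrete, here is a minimal sketch (with invented utilities for a single respondent, not Sawtooth output) of how an importance score is typically computed: the utility range of each attribute, normalized so the ranges sum to 100 %.  Note that even the nearly ignored attribute ends up with a positive importance:

```python
# Illustrative only: importance scores from part-worth utilities.
# Each list holds the invented part-worths for one attribute's levels
# for a single respondent.
utilities = {
    "brand":   [0.8, -0.2, -0.6],
    "price":   [2.1, 0.5, -1.0, -1.6],
    "feature": [0.1, -0.1],   # nearly ignored, but its range is still > 0
}

# Importance = (best minus worst) per attribute, normalized to 100%.
ranges = {attr: max(u) - min(u) for attr, u in utilities.items()}
total = sum(ranges.values())
importances = {attr: 100 * r / total for attr, r in ranges.items()}

for attr, imp in importances.items():
    print(f"{attr}: {imp:.1f}%")
```

In HB analyses this is usually done respondent by respondent and then averaged, which is exactly why noise alone guarantees every attribute some nonzero importance.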

The predicted shares of preference from a market simulator also have standard errors associated with them.  So, there are ways to conduct statistical testing to see if changes to one attribute lead to a higher share of preference than changes to another attribute.

Remember that with piecewise coding in ACBC, the None parameter and the price beta estimates at each cut-point and end point will not seem to converge.  This is expected, due to the way we're doing the coding and the way that piecewise coding leads to some positive correlation among the coded price parameters in the design matrix.  A constant could be added to the implied price part-worths (to all of them) and that same constant added to the None parameter, without any change in fit to the choices.  That's because the constant factors out of the logit equation.
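That invariance is easy to verify numerically.  In this sketch (invented utilities, not real estimates), shifting the price contribution of every product and the None parameter by the same constant leaves the logit shares unchanged:

```python
import math

def logit_shares(utils):
    """Multinomial logit shares for a set of total utilities."""
    exps = [math.exp(u) for u in utils]
    total = sum(exps)
    return [e / total for e in exps]

# Invented utilities for three product concepts plus None.
price_utils = [-0.5, -1.2, -2.0]   # piecewise price contribution per product
other_utils = [1.0, 1.4, 0.6]      # everything except price
none_util = 0.3

products = [p + o for p, o in zip(price_utils, other_utils)]
shares = logit_shares(products + [none_util])

# Add the same constant to every price part-worth and to None.
c = 5.0
products_shifted = [p + c + o for p, o in zip(price_utils, other_utils)]
shares_shifted = logit_shares(products_shifted + [none_util + c])

# The constant factors out of the logit equation, so shares match.
print(all(abs(a - b) < 1e-12 for a, b in zip(shares, shares_shifted)))  # True
```

This is why the None parameter and the price betas can drift together across HB iterations without the fit or the predicted None share changing.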

If you are worried about whether there is good convergence in your data, you could temporarily run a test: use a linear specification for price rather than piecewise.  Run it out to 100K iterations and examine the history of the estimates in the HB plot.  You'd probably see that things seem to converge.  Once you've proven that to yourself, you could go back to the piecewise specification of price and re-run the model (then, the parameters for the price part-worths and the None will look like they wander a bit...but that's due to the issue I mentioned above) to obtain the benefits of the piecewise price specification.