None in ACBC / High share of preference for None in ACBC study


I am working on an ACBC study (four different conjoints) and have a few queries. It would be great if somebody could help me understand these points.

1. I understand that the 'None' parameter in ACBC is calculated from responses in the Screener section (my study does not have a Calibration section). The raw utilities for this parameter are high in magnitude (average 3.5-4), resulting in a high share of preference (>50%). Is this an anomaly? What can I do here?

2. I am using a piecewise function for summed price in each conjoint (tested range -30% to +40%) and have looked at counts for prices to decide the breakpoints. Unconstrained utilities (with about 70k iterations) show an increase in average price utility from the lowest extreme up to a breakpoint (~5%), and then a decrease as price increases further. Should I apply constraints and rerun the estimation so that the utilities follow the natural order?

3. Given the two points above, when I run the share of preference model in the Excel simulator, the share of preference does not fluctuate much within the tested range (-30% to +40% summed price; it hardly moves between 33% and 45% over this range). I was expecting a steady yet substantial decrease in share of preference as price increases.

Would be great to hear your thoughts and possible steps/checks.
asked Jul 22, 2020 by Prakhar

1 Answer

+1 vote
1) As a good starting point, you probably want to run a counts analysis and see how many concepts respondents said would work for them in the Screener section.  The Screener section is essentially coded as a choice between the profile shown and the None option.  If your respondents are rejecting a majority of the screening concepts, then a high None utility would seem correct.
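The suggested counts check can be sketched as a per-respondent acceptance rate. This is only an illustrative mock-up: the dictionary layout and values are assumptions, not Sawtooth's actual data-export format.

```python
# Hypothetical screener responses: 1 = concept marked "a possibility",
# 0 = rejected (i.e., the implicit None was chosen for that concept).
screener = {
    "resp_1": [1, 0, 0, 1, 0, 0, 0, 1],
    "resp_2": [0, 0, 1, 0, 0, 0, 0, 0],
}

# Fraction of screener concepts each respondent accepted. A low rate
# (mostly rejections) is consistent with a high None utility for that person.
for resp, marks in screener.items():
    accept_rate = sum(marks) / len(marks)
    print(f"{resp}: accepted {accept_rate:.0%} of screener concepts")
```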

2) Constraints are typically a good idea when using many breakpoints, as individual respondents may not have had much data between your breakpoints, especially if they are very picky during the Screener section.  As price increases, we would expect utility to decrease, so the slope should have a negative direction.
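To make the "negative slope" idea concrete, here is a minimal sketch of what a monotone-decreasing constraint does to a piecewise price function with a reversal at a breakpoint. The breakpoint and utility values are illustrative, not from the study, and the running-minimum shown is just a simple post-hoc visualization; Sawtooth's HB software imposes constraints during estimation, not like this.

```python
import numpy as np

# Breakpoints across the tested summed-price range (-30% to +40%)
breakpoints = np.array([-30.0, -15.0, 0.0, 5.0, 20.0, 40.0])
raw_util = np.array([1.2, 1.0, 0.8, 0.9, 0.3, -0.5])  # reversal at +5%

# One crude way to force a monotone-decreasing curve: a running minimum.
constrained = np.minimum.accumulate(raw_util)

# Utility at any tested price then comes from piecewise-linear interpolation.
print(np.interp(10.0, breakpoints, constrained))
```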

3) This one is a bit trickier to speculate on, as #1 and #2 might have a big impact on your simulations.  If only a third of people are choosing a product and everyone else chooses None, they might simply be price insensitive, but it could also be that your model is misbehaving a bit because of #2.
answered Jul 22, 2020 by Brian McEwan Platinum Sawtooth Software, Inc. (56,045 points)
Other things to think about:

--If you are using piecewise price utilities, then the utilities for price are no longer zero-centered, so your None utility may need to find a new constant per respondent to compensate.  None utilities can wander around a lot, I've found, if using piecewise functions.  But, the predictions after summing utilities across alternatives and exponentiating turn out right.
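The point that "the predictions turn out right" follows from how logit shares work: adding the same constant to every alternative's total utility cancels out after exponentiating and normalizing. A minimal sketch with made-up utilities:

```python
import math

def shares(utils):
    """Logit shares of preference from total utilities."""
    e = [math.exp(u) for u in utils]
    total = sum(e)
    return [x / total for x in e]

# Illustrative totals: product A, product B, None
utils = [1.5, 0.2, 3.8]
shifted = [u + 2.0 for u in utils]  # same per-respondent constant everywhere

print(shares(utils))
print(shares(shifted))  # identical shares: the constant cancels
```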

--Sometimes I've seen price functions (from lowest to highest prices) increase until a point, then start decreasing.  For example, eye surgery might have a low utility at a really low price (people don't trust the surgeon charging so little), but then increase in utility to a point on the price continuum, after which it decreases as it becomes too expensive.  So, be careful to watch for this and think about whether you should impose price constraints when the demand curve turns out to follow the not-expected pattern for a substantial number of your respondents.
Thanks Brian!

I have checked the selections in the Screener section: for a typical respondent, a concept is marked "a possibility" nearly 60% of the time, and rejected the remaining ~40%. In my experience this is much higher than a traditional CBC None (although I know the None is interpreted differently in the two types of conjoint). Could this be a factor in the lower preference share?

Also, on the piecewise price function, does imposing constraints on price further stabilize the None parameter compared to the unconstrained run? I have 270 respondents in the study and have run multiple constrained estimations with 80k, 100k, and 120k iterations, but the None parameter still has a high magnitude (2-3, though lower than unconstrained). Would you suggest increasing the iterations further?

Would be great to hear your thoughts.
Hi Brian, just one more question. While the recommended approach for summed price in ACBC is piecewise coding, would the results change drastically if we used linear coding (with or without constraints)? Right now we observe minimal change in share of preference over the tested price range (shock, -30% to +40%), suggesting either that the respondents are not price sensitive or that it is an artifact of the current price coding. I just want to check different options. Your thoughts on this are most welcome.
If the graph while running HB is pretty steady in its up and down movements, then increasing the number of iterations is unlikely to change anything.

I think we do tend to see a higher none parameter in an ACBC, since the survey allows respondents to identify things as must-have and unacceptable, so the survey in essence allows the respondents to be more picky.  So, I don't think I would view a higher none option as some sort of mistake.  

You are definitely welcome to run the models with different settings and compare.  I think generally piecewise coding tends to fit the data a bit better because it's more flexible, as a linear coding would force the same change in utility along the entire price range.  But, it should just be a few minutes with 270 respondents to do another utility run and compare the results.  My guess is linear coding will not make much of a change.
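The flexibility difference can be sketched numerically: a linear code fits one slope over the whole price range, while a piecewise code lets the slope vary between breakpoints. The utility values below are illustrative assumptions, not estimates from the study.

```python
import numpy as np

# Hypothetical piecewise price utilities at the breakpoints
prices = np.array([-30.0, -15.0, 0.0, 5.0, 20.0, 40.0])
piecewise_u = np.array([1.2, 1.0, 0.8, 0.4, 0.0, -0.9])

# A linear coding imposes a single slope (up to centering); least squares
# gives the best such line through the piecewise points.
slope, intercept = np.polyfit(prices, piecewise_u, 1)
linear_u = slope * prices + intercept

# The residuals are the local curvature that linear coding cannot capture.
print(np.round(linear_u - piecewise_u, 2))
```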

Are your fit statistics good?  You might be able to reassure yourself by making a copy of the survey, deleting the real data, and using the Data Generator under the Test menu to simulate maybe 100 or 200 respondents (this will take a while, since the survey has to actually run for the testing, so maybe do it over a lunch break or overnight).  You can then run an HB model on the random answers, see what kind of fit you get, and compare that to the real data.
Thanks Brian! The results do not change much.

How should I interpret this high None parameter/share of preference for None (50%-55% with just one product switched on, which itself has a ~40% share of preference), keeping in mind that I did not have any competitor products in the exercise?
I would still do the comparison against random respondents, and if your fit for real respondents is higher than for the random ones, I don't think there is much more to worry about; the respondents in this exercise are perhaps pickier than in past projects.  Remember that it might be a bit too simplistic to simulate just one product versus the None choice, as in the real world people would typically have more than one option to choose among.
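The "one product versus None is simplistic" point can be illustrated with a toy logit simulation: adding a competitor to the scenario pulls share away from None. All utilities below are made-up values for illustration only.

```python
import math

def shares(utils):
    """Logit shares of preference from total utilities."""
    e = [math.exp(u) for u in utils]
    total = sum(e)
    return [x / total for x in e]

none_u, product_u, competitor_u = 3.5, 3.1, 2.9  # illustrative totals

one_product = shares([product_u, none_u])                  # [product, None]
two_products = shares([product_u, competitor_u, none_u])   # [.., .., None]

print(f"None share, one product:  {one_product[-1]:.2f}")
print(f"None share, two products: {two_products[-1]:.2f}")
```

With more real-world alternatives in the scenario, the None share naturally shrinks, which is part of why a single-product simulation can overstate it.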
Thanks Brian.

As I do further analysis, I am observing a change in attribute importance for the other attributes (I believe this is bound to happen as the price utility range gets condensed).

Further, it seems that after applying the price constraints, the (level) importance of the last level of most attributes has increased (the last level represents a higher number of features for all my attributes, and hence carries a higher price than the other levels), whereas with unconstrained estimation the importance of the last level was lower than that of the other levels. Is this a consequence of the price constraints? What can be done to check this, and are there ways to correct it? Thanks in advance.
Is there some way to do a post-hoc correction?
If there are reversals, then constraints will result in a utility change for respondents.  Attribute importances are a simple calculation done on the ranges of a respondent's utility scores, so if the constraints change either the top or bottom utility score for any attribute, then that respondent's importance scores must also change (see https://sawtoothsoftware.com/support/technical-papers/general-conjoint-analysis/interpreting-conjoint-analysis-data-2009 for more information).
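The importance calculation described above (each attribute's share of the total utility range) can be sketched for one respondent. Attribute names and utilities are illustrative assumptions:

```python
# Hypothetical part-worth utilities for one respondent, by attribute level
utilities = {
    "Brand": [0.8, 0.1, -0.9],
    "Price": [1.5, 0.2, -1.7],
    "Speed": [0.4, -0.4],
}

# Importance = attribute's utility range divided by the sum of all ranges.
ranges = {a: max(u) - min(u) for a, u in utilities.items()}
total = sum(ranges.values())
importances = {a: r / total for a, r in ranges.items()}

for attr, imp in importances.items():
    print(f"{attr}: {imp:.1%}")
```

Constraining price compresses its utility range, which shrinks the denominator's price term and mechanically inflates every other attribute's importance, matching what was observed above.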

The constraint should typically be viewed as the correction to the model, while average utilities and average importances are ways of summarizing what is going on in people's models.  Since importance is derived from the utility scores, I can't think of any way to "correct" it separately; it is just a summary of what is going on with the utilities.