# Counterintuitive results

I have a conjoint exercise with 9 attributes. The importance of each attribute is listed below:

| Attribute | Importance |
|-----------|------------|
| Att1 | 14% |
| Att2 | 7% |
| Att3 | 8% |
| Att4 | 5% |
| Att5 | 6% |
| Att6 | 5% |
| Att7 | 11% |
| Att8 | 7% |
| Att9 | 38% |

The average utilities for the last attribute (Att9) are:

| Att9 level | Average utilities (diff) |
|------------|--------------------------|
| Level 1 | 83 |
| Level 2 | 160 |
| Level 3 | 172 |
| Level 4 | 239 |

The current market is:

| Product | Att1 | Att2 | Att3 | Att4 | Att5 | Att6 | Att7 | Att8 | Att9 |
|---------|------|------|------|------|------|------|------|------|------|
| Product 01 | 3 | 2 | 4.5 | 1 | 1 | 20 | 7 | 1.5 | 4 |
| Product 02 | 4 | 2 | 2 | 1 | 1 | 20 | 7 | 1.5 | 4 |
| Product 03 | 3 | 2 | 1 | 2 | 2 | N/A | N/A | 3 | 4 |
| Product 04 | 4 | 2 | 1 | 2 | 2 | N/A | N/A | 3 | 4 |
| Product 05 | 4 | 2 | 2 | 1 | 2 | N/A | N/A | 3 | 4 |
| Product 06 | 3 | 2 | 4.5 | 1 | 2 | N/A | N/A | 3 | 4 |
| Product 07 | 1 | 2 | 2 | 1 | 2 | N/A | N/A | 3 | 4 |
| Product 08 | 1 | 2 | 4.5 | 1 | 2 | N/A | N/A | 3 | 4 |
| Product 09 | 3 | 2 | 4.5 | 1 | 1 | 20 | 7 | 3 | 4 |
| Product 10 | 4 | 2 | 2 | 1 | 1 | 20 | 7 | 3 | 4 |
| Product 11 | 1 | 2 | 2 | 1 | 2 | N/A | N/A | 1.5 | 4 |
| Product 12 | 1 | 2 | 4.5 | 1 | 2 | N/A | N/A | 1.5 | 4 |

The sensitivity shares for the first product on the last attribute are:

| Product | Level 1 | Level 2 | Level 3 | Level 4 |
|---------|---------|---------|---------|---------|
| Product 01 | 19 | 28 | 32 | 13 |

Question:
How come the average utilities show that the last level of Att9 is the best and should gather the most share, yet when I run a sensitivity analysis on this attribute, the worst-performing product is the one with level 4 on attribute 9?

Level 4 of Attribute 9 might not be selected much together with Product 01, but rather with many of the other products. That would cause it to behave differently from what its average indicates...

Do you observe this behavior with all the other products?
answered Apr 6, 2012 by Gold (16,980 points)
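The point that simulated shares depend on the competitive context, not just on average utilities, can be sketched with a toy first-choice simulation. All numbers and noise levels below are invented for illustration (they are not the asker's respondent data); whether level 4 comes out best depends on how noisy the assumed individual part-worths are:

```python
import numpy as np

rng = np.random.default_rng(0)
n_resp, n_prod = 500, 12

# Invented data: each respondent's utility for the non-Att9 part of every
# product, plus noisy individual part-worths for Att9's four levels whose
# means match the averages quoted above (83, 160, 172, 239).
base = rng.normal(0, 60, size=(n_resp, n_prod))
u9 = rng.normal(loc=[83, 160, 172, 239], scale=150, size=(n_resp, 4))

def share_of_product0(level):
    """First-choice share of product 0 when it sits on the given Att9
    level (0-based) while all 11 competitors stay on level 4."""
    totals = base.copy()
    totals[:, 0] += u9[:, level]
    totals[:, 1:] += u9[:, [3]]  # competitors all share level 4
    return float((totals.argmax(axis=1) == 0).mean())

# Sensitivity run: product 0 walks through the four Att9 levels while the
# rest of the market stays fixed, as in the asker's base case.
sens = [share_of_product0(lv) for lv in range(4)]
print(sens)
```

Because every competitor sits on level 4, respondents with reversals (who individually prefer a lower Att9 level) all concentrate on product 0 when it becomes unique on that level, while its losses among level-4 preferrers are diluted across the other eleven products.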

| Product | Level 1 | Level 2 | Level 3 | Level 4 |
|---------|---------|---------|---------|---------|
| Product 01 | 19 | 28 | 32 | 13 |
| Product 02 | 20 | 28 | 32 | 15 |
| Product 03 | 11 | 23 | 27 | 13 |
| Product 04 | 10 | 20 | 24 | 3 |
| Product 05 | 16 | 25 | 31 | 20 |
| Product 06 | 15 | 24 | 30 | 10 |
| Product 07 | 12 | 18 | 22 | 10 |
| Product 08 | 9 | 15 | 18 | 0 |
| Product 09 | 17 | 25 | 27 | 3 |
| Product 10 | 18 | 25 | 29 | 6 |
| Product 11 | 15 | 21 | 24 | 7 |
| Product 12 | 11 | 18 | 21 | 1 |
Hmm, could you please explain what you mean by "Average utilities (diff)" above?
You are lining up all 12 base-case products on level 4 of the last attribute. Thus, when each product is simulated with sensitivity analysis and moves off of level 4 for attribute 9, there is a good improvement, because that product is now viewed as being unique on attribute 9 while all the other competitors are competing with one another on it.

It's likely you are using Randomized First Choice, which penalizes products for similarity. So the rewards for becoming unique on attribute 9 are outweighing the penalties for becoming worse on that attribute in utility.

If attribute 9 is price, then it would be better for you to follow the recommendations in the manual and turn off the correction for similarity for the 9th attribute (you do this on the Scenario Specification + Method Settings), if using the SMRT simulation software. For price, correction for similarity seems less supported than for another attribute like brand, color, or style, where similarity would be likely to lead to greater substitution.
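The similarity correction in Randomized First Choice can be sketched roughly as follows. This is a simplified illustration with made-up part-worths and normal errors, not Sawtooth's exact implementation: the key idea is that products sitting on the same attribute level share that level's error draw, so similar products substitute mostly with each other:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy market: 3 products described on 2 attributes (levels are 0-based).
# Products 0 and 1 share the same level of attribute 1; product 2 is unique.
levels = np.array([[0, 0],
                   [1, 0],
                   [2, 1]])
partworths = [np.array([0.2, 0.1, 0.0]),   # attribute 0 part-worths
              np.array([0.5, 0.4])]        # attribute 1 part-worths

def rfc_shares(n_draws=20000, attr_sd=0.5, prod_sd=0.1):
    """Randomized-First-Choice-style shares: draw one error per attribute
    LEVEL (shared by every product sitting on that level -- this sharing
    is the correction for similarity), add an independent product-level
    error, and tally first choices over many draws."""
    n_prod = len(levels)
    wins = np.zeros(n_prod)
    for _ in range(n_draws):
        errs = [rng.normal(0, attr_sd, size=len(pw)) for pw in partworths]
        total = np.array([
            sum(partworths[a][lv] + errs[a][lv] for a, lv in enumerate(prod))
            for prod in levels
        ])
        total += rng.normal(0, prod_sd, size=n_prod)  # product-level error
        wins[total.argmax()] += 1
    return wins / n_draws

shares = rfc_shares()
print(shares)
```

Because products 0 and 1 share the attribute-1 error draw, a favorable draw lifts both at once and they split those wins, while product 2 captures its wins alone. That sharing is why a product that becomes unique on an attribute is rewarded, which matches the behavior described in the answer above.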
Thank you for your answer, it was very helpful, but the method I am using for simulation is First Choice, because the CBC design was alternative-specific and I can't use the Randomized First Choice method. Is there a way to turn off the correction for similarity for the First Choice method?

Thank you,
Eugen
Eugen,

Very interesting.  Because of the way you are lining up all 12 products on level 4 of the last attribute, but then conducting attribute sensitivity on that same attribute, you are still seeing the "uniqueness reward" happening with First Choice simulations.  That's due to noise in the individual-level utilities, and it was part of the topic presented at a recent "Turbo CBC" meeting by Kevin Karty.  In other words, some respondents have reversals on that last attribute, and they cause the unexpected lift when the product takes worse levels of that attribute.

The issue is that noisy estimates at the individual level can cause this phenomenon.  One way to try remedying the problem is to re-estimate the utilities for attributes that have a known utility order (such as your attribute #9) using utility constraints (monotonicity constraints).  If this is a CBC study, you can impose the utility constraints within CBC/HB.  If it is an ACA study, you can impose them using ACA/HB.  If it's an ACBC study, the HB utility estimation within ACBC will also allow you to do it.
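The idea behind a monotonicity constraint can be sketched as simple tying of violating pairs: wherever an individual's part-worths reverse the known order, the two values are replaced by their average until the sequence is non-decreasing. Inside CBC/HB the constraint is enforced during estimation; this post-hoc version is only an illustration of the concept:

```python
import numpy as np

def enforce_monotone(u):
    """Tie violating adjacent pairs to their average, repeating until the
    part-worths are non-decreasing (a simple post-hoc sketch of the tying
    idea behind utility/monotonicity constraints)."""
    u = np.asarray(u, dtype=float).copy()
    changed = True
    while changed:
        changed = False
        for i in range(len(u) - 1):
            if u[i] > u[i + 1]:
                u[i] = u[i + 1] = (u[i] + u[i + 1]) / 2.0
                changed = True
    return u

# A respondent with a reversal between levels 2 and 3: the two values
# are tied at their average (166) and the order becomes monotone.
print(enforce_monotone([83, 172, 160, 239]))
```

After such constraints, no individual has a reversal on the constrained attribute, so the "uniqueness lift" from respondents who appeared to prefer worse levels disappears from First Choice simulations.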