This option does not assume that the respondent always chooses the product with the highest utility. Instead, it estimates the probability of choosing the simulated product, arriving at a "share of preference" for that product.
This is done in two steps:
1. Subject the respondent's total utilities for the product to the exponential transformation (also known as the antilog): s = exp(utility).
2. Rescale the resulting numbers so they sum to 100.
Suppose two products, A and B, have total utilities 1.0 and 2.0. Then their shares of preference would be computed as follows:
product    utility    exp(utility)    share of preference
A          1.0        2.72            26.9    (2.72/10.11) * 100
B          2.0        7.39            73.1    (7.39/10.11) * 100
Total:                10.11           100.0
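The two steps above can be sketched in Python (the function name is our own for illustration; this is not code from the simulator itself):

```python
import math

def shares_of_preference(utilities):
    """Step 1: exponentiate each total utility; step 2: rescale to sum to 100."""
    exp_utils = [math.exp(u) for u in utilities]
    total = sum(exp_utils)
    return [100.0 * e / total for e in exp_utils]

# Products A and B with total utilities 1.0 and 2.0, as in the table above
shares = shares_of_preference([1.0, 2.0])
print([round(s, 1) for s in shares])  # [26.9, 73.1]
```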
Unlike the First Choice option, the scaling of utilities can make a big difference with the Share of Preference option. Consider what happens if we multiply the utilities by constants of 0.1 or 10.0:
For a multiplier of 0.1:

product    utility    exp(utility)    share of preference
A          0.1        1.105           47.5
B          0.2        1.221           52.5
For a multiplier of 10.0:

product    utility    exp(utility)    share of preference
A          10.0       22026           0.005
B          20.0       4.85E08         99.995
If we multiply the utilities by a small enough constant (we refer to this as the Exponent), the shares of preference can be made nearly equal. If the utilities are made small enough, every product receives an identical share of preference, irrespective of the data.
If we multiply the utilities by a large enough constant (Exponent), the shares of preference can be made equivalent to the First Choice model.
It is apparent that scaling of the utilities can make a big difference with the Share of Preference Models. Most of Sawtooth Software's utility estimation methods result in utilities appropriate for use with Share of Preference models.
The choice simulator lets you scale the utilities within the Share of Preference option at the time the simulation is done. This is accomplished by a parameter called the Exponent that you can set when preparing for simulations. The default value of the Exponent is 1. The Exponent can be used to adjust the sensitivity of the simulation results so that they more accurately reflect out-of-sample holdout choices, or actual market behavior.
A smaller exponent causes small shares to become larger, and large shares to become smaller — it has a "flattening" effect. In the limit, with a very small exponent (near 0) every product receives the same share of preference.
A large exponent causes large shares to become larger, and small shares to become smaller — it has a "sharpening" effect. In the limit, a very large exponent produces results like those of the First Choice option.
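The flattening and sharpening effects can be illustrated by folding the Exponent into the computation (a sketch; the function name is ours):

```python
import math

def shares_of_preference(utilities, exponent=1.0):
    """Multiply utilities by the Exponent before the logit transformation."""
    exp_utils = [math.exp(exponent * u) for u in utilities]
    total = sum(exp_utils)
    return [100.0 * e / total for e in exp_utils]

utils = [1.0, 2.0]
print(shares_of_preference(utils, 0.1))   # nearly equal shares (flattening)
print(shares_of_preference(utils, 10.0))  # nearly all share to B (sharpening)
```

With an Exponent of 0.1 the shares are about 47.5 / 52.5; with an Exponent of 10.0 the preferred product captures over 99.99% of the share, essentially reproducing the First Choice rule.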
Leading researchers tend to find that respondents make choices in conjoint questionnaires with less error than choices made in the real world, leading to market simulators exhibiting relatively "sharper" share differences. So, if the Exponent is adjusted, the typical direction is to adjust below 1.0, often to something in the range of 0.3 to 0.8. Exponent adjustments below about 0.2 (for conjoint part-worths estimated via logit-based methods) would seem extreme and point to possible problems in the data (either the part-worth utilities or the holdout judgments being used to tune the exponent).
Top N Approach for Reducing IIA Troubles
The Top N approach uses the share of preference (logit) rule, but only allocates share across the Top N products within the simulation scenario, where N is an integer specified by the researcher (and where the "None" alternative is also counted as a product). For example, the Top N approach with N=3 will only allocate share for each respondent to the top (most preferred) 3 products. As an illustration, given the following simulation scenario with five products and Top N = 3, the shares of preference for a single individual are:
Product Label    Utility    Exp(Utility)    Traditional Share    Top 3 Share
                                            of Preference        of Preference
Product 1          2.36       10.59              52.8                 54.2
Product 2          1.98        7.24              36.1                 37.1
Product 3          0.53        1.70               8.5                  8.7
Product 4         -1.24        0.29               1.4                  0.0
Product 5         -1.48        0.23               1.1                  0.0
Total:                        20.05             100.0                100.0
Note that the ratios of shares of preference among the most preferred 3 products for this respondent (products 1, 2, and 3) are identical under the traditional share of preference rule or the Top 3 rule. However, only the top 3 products in terms of utility receive any share. Why the Top N approach? It can reduce the IIA difficulties (also known as the Red-Bus/Blue-Bus problem), especially in the case of very large models that involve, say, dozens or more brands (SKUs).
The Top N approach involves only a small modification to the Share of Preference logic. When N=1, results are equivalent to the First-Choice rule. When N is equal to the number of products in the simulation (plus the "None" alternative, if applicable), results are equivalent to the Share of Preference rule.
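A minimal Python sketch of the Top N rule, ignoring the rare case of tied utilities (the function name is ours):

```python
import math

def top_n_shares(utilities, n, exponent=1.0):
    """Allocate logit share only among the n highest-utility products;
    every other product receives 0. Assumes no exact ties in utility."""
    ranked = sorted(range(len(utilities)),
                    key=lambda i: utilities[i], reverse=True)
    top = set(ranked[:n])
    exp_utils = [math.exp(exponent * u) if i in top else 0.0
                 for i, u in enumerate(utilities)]
    total = sum(exp_utils)
    return [100.0 * e / total for e in exp_utils]

# The five-product scenario from the table above, with N=3
shares = top_n_shares([2.36, 1.98, 0.53, -1.24, -1.48], n=3)
print([round(s, 1) for s in shares])  # [54.2, 37.1, 8.7, 0.0, 0.0]
```

With n equal to 1 this reduces to the First Choice rule; with n equal to the number of products it reduces to the ordinary Share of Preference rule.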
The Top N option was introduced to us by Kees Van der Wagt of SKIM Group at our 2014 Turbo Choice Modeling seminar in Amsterdam. As far as we are aware, he is the originator of this idea.
Top N occupies a middle position between the First Choice rule (which is nearly immune to IIA difficulties) and the Share of Preference rule (which is subject to them). However, unlike First Choice, which is not tunable for scale (the flatness or steepness of shares of preference), Top N is tunable via two mechanisms: the size of N and the Exponent. The larger the N, the flatter the shares and the more subject to IIA they become. The smaller the Exponent, the flatter the shares and the more subject to IIA they become. However, the larger the N or the smaller the Exponent, the more precise the results (smaller standard errors, because more information is extracted per respondent about the relative preferences for product alternatives).
Top N is a new idea, so no published research is yet available. For now, we recommend using Top N in situations where the researcher believes that the simulator is facing IIA difficulties and wishes to reduce the Red-Bus/Blue-Bus problems using a different method instead of Randomized First Choice (a much slower algorithm). In situations where the number of products in the simulation is about ten or more, somewhere between a "Top 3" and a "Top 5" rule may be a reasonable choice. When using the Top N model, you may need to reduce the Exponent (below 1.0) to produce shares of preference with similar ratio differences among them as the original Share of Preference rule.
Top N is extremely fast, essentially as fast as First Choice or Share of Preference, which gives it an advantage over the much slower Randomized First Choice Method. This would especially be an issue if performing lengthy search optimizations under the Advanced Simulation Module.
Randomized First Choice would seem to offer good correction for product similarity (reduction of IIA problems) when similarity can be represented well by simply counting the number of attribute levels that are shared across competing products. However, in situations such as a study with just two attributes (SKU and price), Randomized First Choice is ineffective, and the "Top N" approach would seem to be a better alternative.
The Top N rule allocates share to the top N products per respondent, proportional to the share of preference rule, with just a couple of rare exceptions:
Exception 1:
(This is a very rare exception; the most likely situation being that the user defines two identical products within the market scenario.)
Consider a Top N situation in which N=3, and a given respondent has the following utilities for five products:
A 4.0
B 4.0
C 2.6
D 2.2
E 0.9
In the situation above, products A and B are tied in utility (out to about 5 decimal places of precision). We modify the Top 3 rule in this case to first divide the share of preference among products within the Top 3 that have unique utility: A or B, and C. A and B have identical utility, so we arbitrarily choose one of them (A) to represent the identical pair. The shares of preference (after exponentiation and normalization to sum to 100%) are:
A 80%
C 20%
Next, we divide A's share of preference in half (since B has identical preference) and assign final Top 3 shares of preference as follows:
A 40%
B 40%
C 20%
D 0%
E 0%
Exception 2:
(This is a very rare exception; the most likely situation being that the user defines two identical products within the market scenario.)
Consider a Top N situation in which N=3, and a given respondent has the following utilities for five products:
A 3.7
B 3.3
C 2.8
D 2.8
E 0.9
In the situation above, products C and D are tied in utility (out to about 5 decimal places of precision). We modify the Top 3 rule in this case to first divide the share of preference among the top 3 products in terms of unique utility: A, B, and either C or D. The shares of preference (after exponentiation and normalization to sum to 100%) are:
A 48%
B 32%
C 20%
Next, we divide C's share of preference in half (since D has identical preference) and assign final Top 3 shares of preference as follows:
A 48%
B 32%
C 10%
D 10%
E 0%
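Both exceptions follow one pattern: compute logit shares over the distinct utility values covering the top N products, then split each value's share equally among the products that tie on it. A Python sketch under that reading (the function name and the 5-decimal tie tolerance are our assumptions):

```python
import math

def top_n_shares_with_ties(utilities, n, exponent=1.0, ndigits=5):
    """Top N logit shares, splitting a tied group's share equally
    (ties detected out to ndigits decimal places, as described above)."""
    groups = {}                              # rounded utility -> product indices
    for i, u in enumerate(utilities):
        groups.setdefault(round(u, ndigits), []).append(i)
    chosen, covered = [], 0
    for u in sorted(groups, reverse=True):   # highest utility first
        if covered >= n:
            break
        chosen.append(u)
        covered += len(groups[u])            # a boundary tie pulls in all members
    exp_utils = {u: math.exp(exponent * u) for u in chosen}
    total = sum(exp_utils.values())
    shares = [0.0] * len(utilities)
    for u in chosen:
        for i in groups[u]:                  # split the group's share equally
            shares[i] = 100.0 * exp_utils[u] / total / len(groups[u])
    return shares

# Exception 1: A and B tie at the top (N=3)
print([round(s) for s in top_n_shares_with_ties([4.0, 4.0, 2.6, 2.2, 0.9], 3)])
# [40, 40, 20, 0, 0]
# Exception 2: C and D tie at the boundary (N=3)
print([round(s) for s in top_n_shares_with_ties([3.7, 3.3, 2.8, 2.8, 0.9], 3)])
# [48, 32, 10, 10, 0]
```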