Details of ACA Utility Estimation

The following describes ACA's method of part-worth utility computation using OLS (ordinary least squares) regression.

 

ACA includes two major sections: self-explicated priors and conjoint pairs.  Part-worth utility estimates for these two sections are determined as follows:

 


Prior Utilities:

 

If rank orders of preference are asked (not currently offered in ACA), we convert them to relative desirabilities by reversing them.  For example, ranks of 1, 2, and 3 would be converted to values of 3, 2, and 1, respectively.  If desirability ratings are asked (the only method offered in ACA), those are retained "as is."

 

The average for each attribute is subtracted to center its values at zero.  For example, desirability values 3, 2, and 1 would be converted to 1, 0, and -1, respectively.

 

The values for each attribute are scaled to have a range of unity.  For example, desirability values of 1, 0, and -1 would be converted to .5, 0, and -.5.

 

The importance ratings for each attribute are scaled to range from 1 to 4, and then used as multipliers for the unit-range desirability values.  Thus, if an attribute has desirabilities of .5, 0, and -.5 and an importance of 3, we get 1.5, 0, and -1.5.
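
As an informal illustration of the arithmetic above, here is a short Python sketch.  The ratings, the importance value, and the variable names are hypothetical; this is not ACA's own code.

desirabilities = [3, 2, 1]      # hypothetical desirability ratings for one attribute's levels
importance = 3                  # importance rating, already rescaled to the 1-to-4 range

mean = sum(desirabilities) / len(desirabilities)
centered = [d - mean for d in desirabilities]             # [1.0, 0.0, -1.0]
spread = (max(centered) - min(centered)) or 1.0           # guard against identical ratings
unit_range = [c / spread for c in centered]               # [0.5, 0.0, -0.5]
prior_partworths = [importance * u for u in unit_range]   # [1.5, 0.0, -1.5]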

 

The resulting values are initial estimates of part-worths, with these characteristics:

 

For each attribute the range of utility values is proportional to stated importance, and attribute importances differ by at most a factor of 4 (since importance ratings are rescaled to range from 1 to 4).

 

Within each attribute the values have a mean of zero, and differences between values are proportional to differences in desirability ratings or rank orders of preference.

 


Pairs Utilities:

 

An independent variable matrix is constructed with as many columns as levels taken forward to the pairs questions.  If a level is displayed within the left concept, it is coded as -1; levels displayed within the right-hand concept are coded as +1.  All other values in the independent variable matrix are set to 0.  

 

A column vector is created for the dependent variable as follows: the respondent's answers are zero-centered, with the most extreme value for the left concept assigned -4 and the most extreme value for the right-hand concept assigned +4.  Interior ratings are fit proportionally within that range.
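
For example, assuming a 1-to-9 pairs rating scale (the number of scale points is an assumption here, since it is set in the questionnaire), the rescaling could be sketched in Python as:

def scale_pairs_response(answer, n_points=9):
    # Map a 1..n_points pairs rating onto -4..+4: the left-most extreme becomes -4,
    # the right-most extreme +4, and interior ratings are spaced proportionally.
    midpoint = (n_points + 1) / 2.0
    return (answer - midpoint) * 8.0 / (n_points - 1)

# On a 9-point scale: 1 -> -4.0, 5 -> 0.0, 9 -> +4.0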

 

Each pairs question contributes a row to both the independent variable matrix and the dependent variable column vector.  Additionally, an n x n identity matrix is appended to the independent variable matrix, where n is the total number of levels taken forward to the pairs questions, and n additional values of 0 are appended to the dependent variable vector.  (These extra rows gently shrink the estimates toward zero and ensure that the regression can be computed even when relatively few pairs questions have been asked.)  The resulting independent variable matrix and dependent variable column vector each have t + n rows, where t is the number of pairs questions.  OLS estimates of the part-worths for the n levels are computed by regressing the dependent variable column vector on the matrix of independent variables.
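
A minimal numpy sketch of this setup follows.  The dimensions, the example coding, and all variable names are invented for illustration; ACA's own implementation may differ in detail.

import numpy as np

n = 6   # levels taken forward to the pairs questions (hypothetical)
t = 4   # pairs questions answered (hypothetical)

X = np.zeros((t, n))    # independent variable matrix: one row per pairs question
y = np.zeros(t)         # dependent variable: zero-centered responses in -4..+4

# One pairs question: levels 0 and 2 shown in the left concept (-1),
# levels 1 and 4 shown in the right concept (+1), answer leaning right (+2).
X[0, [0, 2]] = -1
X[0, [1, 4]] = +1
y[0] = 2
# ...rows 1 through t-1 are filled from the remaining pairs questions the same way.

# Append an n x n identity matrix and n zeros, then estimate by OLS.
X_aug = np.vstack([X, np.eye(n)])
y_aug = np.concatenate([y, np.zeros(n)])
pairs_partworths, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)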

 


Combining the Priors and Pairs Utilities:

 

The priors and pairs part-worths are normalized to have equal sums of differences between the best and worst levels of each attribute across all attributes.  (Note that the procedures described above automatically result in zero-centered part-worths within attribute for both sets of part-worths.)  The prior part-worth utilities for levels also included in the pairs questions are multiplied by n/(n+t), where n is the total number of levels used in the Pairs section, and t is the number of pairs questions answered by the respondent.  Any element in the priors that was not included in the Pairs section is not modified.  The pairs utilities are multiplied by t/(n+t).  The two vectors of part-worths (after multiplication by the weights specified above) are added together.  These are the final part-worths, prior to calibration.
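
One way to sketch this weighting step in Python (the function, its arguments, and the direction of the normalization are our own choices; the inputs are assumed to be numpy arrays covering the n levels used in the Pairs section):

import numpy as np

def sum_of_ranges(utils, attribute_slices):
    # Sum, across attributes, of (best level minus worst level).
    return sum(utils[s].max() - utils[s].min() for s in attribute_slices)

def combine(priors, pairs, attribute_slices, t):
    # priors, pairs: zero-centered part-worth vectors for the n Pairs levels
    # attribute_slices: e.g. [slice(0, 3), slice(3, 6)] marking each attribute's levels
    n = len(pairs)
    # Rescale the pairs part-worths so both sets have equal sums of ranges.
    pairs = pairs * (sum_of_ranges(priors, attribute_slices) /
                     sum_of_ranges(pairs, attribute_slices))
    # Weight the priors by n/(n+t), the pairs by t/(n+t), and add.
    return priors * n / (n + t) + pairs * t / (n + t)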

 

As a final step, the part-worth utilities are calibrated.  It is widely recognized that the part-worths arising from most conjoint methods are scaled arbitrarily, and that the only real information is contained in the relative magnitudes of differences among them.  So far, that is true of ACA as well.

 

However, the calibration concepts permit scaling of part-worths in a non-arbitrary way.  In any product category, some respondents will be more interested and involved than others.  We attempt to measure each respondent's degree of involvement by asking "likelihood of buying" questions for several concepts that differ widely in attractiveness.  The data obtained from those concepts is useful in three ways:

 

Correlations between part-worths and likelihood responses may be used to identify unmotivated or confused respondents.  Respondents whose likelihood responses are not related to their part-worths should probably not be included in subsequent preference simulations.

 

 

 

The level of likelihood responses may identify respondents who are more or less involved in the product category.  Respondents who give low likelihood responses even to concepts custom-designed for them should probably be treated as poor prospects in simulations of purchase behavior.

 

 

 

Variation in likelihood responses may also identify respondents who are "tuned in" to the product category.  A respondent who gives a low likelihood rating to the least attractive concept and a high rating to the most attractive should be made to respond sensitively in preference simulations, whereas someone who gives every concept similar likelihood values should be made insensitive in simulations.

 

Each respondent is first shown what should be the least attractive possible concept, followed by the most attractive possible concept, as determined from his or her own answers.  Those two concepts establish a frame of reference.  The remaining concepts are of middling attractiveness.  We determine an intercept and one regression coefficient to apply to utilities to best predict logits of likelihood responses.  Those parameters are then used in a final scaling of utilities, which are therefore no longer arbitrarily scaled.  The procedure is as follows:

 

Let:   p  = the predicted likelihood of buying a concept

       x1 = the concept's utility based on the final "uncalibrated" utilities

       b1 = the coefficient used to weight the utilities

       a  = an intercept parameter

 

The actual likelihood response is a number on a scale from 0 to 100.  Responses are trimmed to the range of 5 to 95 (so that the logit transformation below is defined).  Then, using the logit transformation, we model buying likelihood as a function of the respondent's utilities as:

 

ln [ p / (100 - p) ] = a + b1*x1

 

If the regression coefficient is less than 0.00001, we assume the estimation is faulty and use a conservative positive value (0.00001).  The r-squared (measure of fit) reported in the .UTL file is set to 0 in such cases.  If the calibration concepts section is not included in the interview, the respondent is assumed to have answered 0 and 100 to the worst and best concepts, respectively, and 50 to the other concepts.

 

To calibrate the part-worths, each is multiplied by b1.  The intercept a is divided by the number of attributes, and the quotient is added to the part-worth for every attribute level.  The calibrated part-worths for any concept can then be summed, and the antilog of that sum is a prediction of the odds [p / (100 - p)] of claimed likelihood of purchase, just as though that concept had been included in the calibration section of the questionnaire.
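
The calibration step might be sketched in Python as follows.  The 0-to-100 response scale, the use of a simple least-squares fit, and all names here are our assumptions for illustration, not ACA's actual code.

import numpy as np

def calibrate(concept_utils, likelihoods, partworths, n_attributes):
    # concept_utils: total uncalibrated utility of each calibration concept
    # likelihoods:   stated 0-100 likelihood-of-buying answers for those concepts
    p = np.clip(np.asarray(likelihoods, dtype=float), 5, 95)   # trim to 5..95
    logit = np.log(p / (100.0 - p))

    # Fit ln[ p / (100 - p) ] = a + b1*x1 by ordinary least squares.
    b1, a = np.polyfit(np.asarray(concept_utils, dtype=float), logit, 1)
    if b1 < 1e-5:            # faulty estimation: fall back to a small positive slope
        b1 = 1e-5

    # Scale every part-worth by b1 and spread the intercept evenly across attributes,
    # so that a concept's summed calibrated part-worths reproduce a + b1*x1.
    return b1 * np.asarray(partworths, dtype=float) + a / n_attributes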

 


A Note about Hierarchical Bayes Estimation

 

OLS has been successfully used in ACA calculations for over three decades.  However, a newer technique called hierarchical Bayes (HB) estimation provides a more theoretically satisfying way of combining information from the priors and pairs, and its results are usually also better in practice, yielding improved predictions of holdout questions.  We recommend that interested readers investigate ACA/HB by downloading the technical paper from our Web site (http://www.sawtoothsoftware.com/download/techpap/acatech.pdf).

 

 
