Staying out of Trouble with ACA


Here are some hints for staying out of trouble with ACA:

 

Proper execution of the Importance questions: Over the last 10 years, it has become clear to us that the self-explicated importance question is the aspect of ACA that makes or breaks it.  If the importance questions are asked poorly, they can misinform utility estimation and do real harm.  An example of poor execution is failing to educate respondents ahead of time about the full array of attributes, and then asking the importance questions one by one on separate screens.  Some ideas for improving the quality of the importance questions are given in the section of this documentation entitled Customizing the Importance Question.

 

Using too many prohibitions: ACA lets you specify that certain combinations of attribute levels shouldn't occur together in the questionnaire. But if you prohibit too many combinations, ACA won't be able to produce a good design, and may fail altogether. Keep in mind that you can present combinations of levels that do not exist in the market today; including unusual combinations can often improve utility estimation, and during market simulations you can simply avoid the combinations that seem unusual. In short, prohibitions should be used sparingly.
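
A rough way to gauge how much of the design space prohibitions consume is simply to count the level combinations that survive. A minimal sketch in Python (the brands, prices, and prohibited pairs are hypothetical, purely for illustration):

    from itertools import product

    # Hypothetical two-attribute example: how many brand/price
    # combinations remain after prohibiting certain pairs?
    brands = ["A", "B", "C"]
    prices = ["$10", "$20", "$30"]

    # Each prohibition removes one (brand, price) combination.
    prohibited = {("A", "$30"), ("B", "$30"), ("C", "$10")}

    allowed = [c for c in product(brands, prices) if c not in prohibited]
    print(f"{len(allowed)} of {len(brands) * len(prices)} combinations remain")

    # Three prohibitions already remove a third of this small design
    # space; each additional one starves the design further.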

 

Reversing signs of ordered attribute levels: If you already know the order of preference for an attribute's levels, such as for quality or price, you can tell ACA which direction is preferred and avoid asking respondents those questions. You do this by means of the a priori settings: Worst to Best, or Best to Worst. A common mistake is to accidentally specify the wrong order, which produces nonsensical data that can be difficult to salvage. To avoid this situation, take the interview yourself, making sure that the questions are reasonable (neither member of a pair should dominate the other on all included attributes). Also, answer the pairs section with mid-scale values and then check that the utilities come out as you expect. If you do misspecify the a priori order of levels, the ACA/HB module can be used quite effectively to recompute the utilities and help salvage the situation.
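
One simple safeguard is to verify, before fielding (or after a test interview), that the estimated utilities actually follow the direction you specified. A minimal sketch in Python, assuming the part-worths for one ordered attribute are available as a list from worst to best (the values shown are hypothetical):

    # Hypothetical utilities for an attribute specified as Worst to Best.
    # If the a priori order is correct, they should rise monotonically.
    utilities = [-1.2, -0.4, 0.3, 1.1]   # e.g., from your own test interview

    is_monotonic = all(a <= b for a, b in zip(utilities, utilities[1:]))
    if not is_monotonic:
        print("Warning: utilities contradict the a priori order; "
              "check whether the level order was reversed.")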

 

Using ACA for pricing research when not appropriate: There are three aspects to this point.

 

All "main effects" conjoint methods, including ACA, assume that every product has the same sensitivity to price. This is a bad assumption for many product categories, and CBC or ACBC may be a better choice for pricing research, since they can measure unique price sensitivity for each brand.

 

 

When price is just one of many attributes, ACA may assign too little importance to it. In a few published articles, researchers have reported that it may sometimes be appropriate to increase the weight that ACA attaches to price. This is particularly likely when the study includes several attributes that are similar in the minds of respondents, such as Quality, Durability, and Longevity. If redundant attributes like these are included, they may appear more important in total than they should, and other attributes, such as price, may appear less important than they really are. This problem is exacerbated if a wide range for price is specified.
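
The arithmetic behind this is easy to reproduce. Attribute importance is often computed as the attribute's utility range divided by the sum of ranges across all attributes; a minimal sketch with hypothetical ranges shows how near-duplicate attributes crowd out price:

    # Importance = an attribute's utility range / sum of all ranges.
    ranges = {"Quality": 2.0, "Durability": 1.8, "Longevity": 1.9,
              "Price": 2.5}
    total = sum(ranges.values())
    for attr, r in ranges.items():
        print(f"{attr}: {100 * r / total:.0f}% importance")

    # The three near-redundant "quality" attributes together claim about
    # 70% of total importance, leaving price only about 30%, even though
    # a single merged quality attribute might rival price one-on-one.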

 

 

It is not a good idea to use the "share of preference with correction for product similarity" with quantitative variables such as price. Suppose there are five price levels, and all products start at the middle level. As one product's price is raised, it can receive a "bonus" for being less like the other products, which more than compensates for the utility it loses to its higher price. As a result, the correction for product similarity can produce nonsensical price sensitivity curves. The problem can also occur, though typically to a lesser degree, with the improved method for handling product similarity, Randomized First Choice (RFC). We suggest conducting sensitivity analysis with the plain Share of Preference method when modeling demand curves for quantitative attributes like price.
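
For demand curves, plain Share of Preference behaves sensibly under sensitivity analysis: hold the competitors fixed and sweep the test product through its price levels, recomputing shares with the logit rule at each step. A minimal sketch (all utilities are hypothetical):

    import math

    # Hypothetical total utilities for two fixed competitors.
    competitor_utils = [1.0, 0.6]

    # Test product's total utility at each price level (falls as price rises).
    test_utils = {"$1": 1.4, "$2": 1.0, "$3": 0.6, "$4": 0.2, "$5": -0.2}

    for price, u in test_utils.items():
        exps = [math.exp(u)] + [math.exp(c) for c in competitor_utils]
        share = exps[0] / sum(exps)
        print(f"{price}: share of preference = {share:.2f}")

    # Shares decline smoothly as price rises; there is no similarity
    # "bonus" to bend the demand curve upward.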

 

Using unequal intervals for continuous variables: If you use the ranking rather than the rating option, ACA's prior estimates of utility for the levels of each attribute have equal increments. That works well if your attribute levels are spaced regularly, for example with constant increments such as prices of $0.10, $0.20, and $0.30, or proportional increments such as 1 meg, 4 megs, and 16 megs. But if you use oddly structured intervals, such as prices of $1.00, $1.90, and $2.00, ACA's utilities are likely to be biased toward equal utility intervals. This problem can be avoided by using the ACA/HB module to compute ACA utilities.
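
To see the distortion, compare the actual spacing of the levels with what equal utility increments implicitly assume. A minimal sketch using the $1.00/$1.90/$2.00 example from above:

    # Levels $1.00, $1.90, $2.00: the first gap is nine times the second.
    prices = [1.00, 1.90, 2.00]

    # Equal-increment priors treat each adjacent pair of levels as one
    # equal utility step apart, regardless of the dollar spacing.
    equal_step = 1.0
    implied_per_dollar = [equal_step / (upper - lower)
                          for lower, upper in zip(prices, prices[1:])]
    print(implied_per_dollar)   # roughly [1.11, 10.0]

    # The prior implicitly says a dollar matters about nine times more
    # between $1.90 and $2.00 than between $1.00 and $1.90: exactly the
    # bias toward equal utility intervals described above.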

 

Including too many attributes: ACA lets you study as many as 30 attributes, each with up to 15 levels. But that doesn't mean anyone should ever have a questionnaire that long! Many of the problems with conjoint analysis occur because we ask too much of respondents. Don't include n attributes when n-1 would do!

 

Including too many levels for an attribute: Some researchers mistakenly use many levels in the hope of achieving more precision. With quantitative variables such as price or speed, you will get more precision by measuring only 5 levels and using interpolation for intermediate values. If you must measure more than 5 levels, we strongly encourage you to use the ACA/HB module for estimating utilities; it does a much better job with levels that no single respondent evaluates in detail.
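
Interpolation itself is straightforward: the utility of an intermediate value is read off the piecewise-linear curve through the measured levels. A minimal sketch (the levels and utilities are hypothetical):

    # Hypothetical measured price levels and their estimated utilities.
    levels = [10, 20, 30, 40, 50]           # prices in dollars
    utils  = [1.2, 0.7, 0.1, -0.5, -1.3]    # utility falls as price rises

    def interpolate(x, xs, ys):
        """Piecewise-linear interpolation between measured levels."""
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        raise ValueError("outside the measured range; don't extrapolate")

    print(interpolate(25, levels, utils))   # utility of a $25 product (about 0.4)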

 

Interpreting simulation results as "market share": Conjoint simulation results often look so much like market shares that people sometimes forget they are not. Conjoint simulation results seldom include the effects of distribution, out-of-stock, or point-of-sale marketing activities. Also, they presume every buyer has complete information about every product. Researchers who represent conjoint results as forecasts of market shares are asking for trouble.

 

Not including adequate attribute ranges: It's usually all right to interpolate, but risky to extrapolate. With quantitative attributes, include enough range to cover all the products you will want to simulate. Before data collection, it is a good idea to ask the client to list the product scenarios that should be investigated in market simulations; this exercise often reveals limitations or oversights in your attribute level definitions.
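
A simple pre-fielding check is to compare the client's wish list of scenarios against the planned attribute ranges. A minimal sketch (the range and scenario values are hypothetical):

    # Planned range for the price attribute, and the simulation scenarios
    # the client listed before data collection.
    measured_range = (10, 50)
    scenario_prices = [15, 35, 60]   # the client wants to test a $60 product

    for p in scenario_prices:
        if not measured_range[0] <= p <= measured_range[1]:
            print(f"${p} is outside the measured range {measured_range}: "
                  "extend the attribute levels or drop the scenario.")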

 

Imprecise attribute levels: We assume that attribute levels are interpreted similarly by all respondents. That isn't possible with "loose" descriptions like "10 to 14 pounds" or "good looking."

 

Attribute levels not mutually exclusive: Every product must have exactly one level of each attribute. Researchers new to conjoint analysis sometimes fail to realize this and use attributes for which several levels could describe the same product. For example, with magazine subscription services, one might imagine an attribute listing the magazines respondents could read; a respondent might well want to read more than one. An attribute like that should be divided into several attributes, each with levels of "yes" and "no."
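
The restructuring is mechanical: each would-be level of the multi-select attribute becomes its own two-level attribute. A minimal sketch (the magazine titles are hypothetical):

    # One multi-select "attribute" a respondent could satisfy several ways...
    magazines = ["News Weekly", "Sports Digest", "Tech Monthly"]

    # ...becomes several mutually exclusive yes/no attributes, so every
    # product profile has exactly one level of each.
    attributes = {title: ["Yes", "No"] for title in magazines}
    for title, levels in attributes.items():
        print(f"Attribute 'Includes {title}': levels {levels}")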

 
