Developing a List of Items


Choice studies such as those supported by the MaxDiff software,

 

MaxDiff

Method of Paired Comparisons (MPC)

Choices from sets of three (triples), sets of four (quads), etc.

 

usually involve at least eight items (i.e., features or attributes).  With seven items or fewer, a designed choice experiment may be overkill.  When dealing with so few items and with reasonably sophisticated respondents, perhaps an allocation-based task would be sufficient.  Even so, we could imagine conducting a paired comparison exercise with as few as four or five items, if the demands of the project required a simple respondent task (appropriate for people of all educational and cultural backgrounds) and avoidance of the response-style bias typically seen with traditional ratings scales.

 

On the high end, we at Sawtooth Software have been involved in a paired comparison experiment that involved 160 employment-related benefits/conditions.  Sample size was in the thousands of respondents, and we pooled information across respondents for estimation.  We're certain other researchers have conducted even larger studies than that.

 


Specifying a List of Items

 

On the Items tab of the MaxDiff Exercise dialog, you can select an existing list to use in the MaxDiff exercise, type a new list by clicking the Add... button, or copy-and-paste a list of items from another program by clicking the Paste list member(s) from the clipboard button.  Depending on your license, you can specify up to 2,000 items.  Items can include text, graphics, or a combination of the two.

 

 


Recommendations for Item Descriptions:

 

Item text should be clear and succinct. Anything you can do to help respondents process information more quickly and accurately is helpful.

 

Items should be as specific and actionable as possible.  For example, it would probably be more useful to describe a level as "Fuel efficiency increased from 5 liters/hour to 4.5 liters/hour" instead of "Improved fuel efficiency."  That said, we realize that such quantitative precision doesn't make sense for many studies and items.  For example, in an image study, we may very well ask respondents to select which picture of an automobile most "Makes me feel successful when I drive it."  How one defines "successful" in a concrete or quantitative way is probably not useful to the goals of the study.

 

Items can be made multi-level and mutually exclusive.  For example, in a study of fast-food restaurant features, rather than ask about "Fast service" generically, you might create three separate items that probe specific levels of fast service:

 

Order and receive food within 3 minutes

Order and receive food within 6 minutes

Order and receive food within 10 minutes

 

When considering a fast-food restaurant, it seems rational that all respondents would prefer faster service to slower service.  Therefore, it makes little sense to include more than one of these levels within the same set.  You could specify prohibitions between these levels (from the Design tab) so that they never appear compared with one another.  It is also possible during estimation to require that faster levels of service receive a higher score than slower levels of service (by specifying monotonicity constraints).

 

Another example of multi-level, mutually-exclusive levels was shown in the award-winning paper at the 2004 Sawtooth Software Conference by our colleague Keith Chrzan.  Using a MaxDiff methodology, he studied price sensitivity for options for automobiles.  Each option was available at multiple prices so he could plot the resulting scores as relative demand curves.  (Of course, the same option never appeared within the same set at different prices.)

 

You may mix multi-level items with single-level items within the same study.  However, we should note that the increased frequency of multi-level attributes in the design might bias respondents to pay more attention to that dimension, though we are not aware of any research in MaxDiff or MPC that substantiates that concern.

 

Graphics are possible! Remember, Lighthouse Studio lets you use graphics as list items.  

 

It is sometimes helpful to create a "reference" level with monetary meaning.  Some researchers have found it helpful to associate one of the levels with a specific monetary equivalent.  For example, in studying job-related conditions and benefits, it might be useful to include a level that says: "Receive an immediate $500 bonus."  Or, if studying improvements to a product, we might include a level that says: "Receive a $20 off coupon."  In both cases, we can associate a specific item score with a specific and immediate monetary gain.  The scores for other levels may then be compared to this monetary-based reference level (the reference item's scores could also be subtracted from all other scores, to anchor the scores around the 0-based reference point).  Also note that it is possible to make the monetary reference point multi-leveled with mutually exclusive levels, such as "$5 off coupon," "$10 off coupon," "$15 off coupon," "$20 off coupon."  This provides multiple monetary-grounded reference points for comparing the scores of other items in the study.
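
 

To illustrate with purely hypothetical numbers: suppose the reference item "Receive a $20 off coupon" earns a score of 10 and another item earns a score of 16.  Subtracting the reference score anchors the scale around zero:

 

    anchored score = item score - reference score = 16 - 10 = 6

 

A positive anchored score indicates an item valued more than the $20 coupon; a negative anchored score indicates an item valued less.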

 

Status Quo level to establish reference point (anchor).  In other studies, such as those studying potential modifications to existing products, services, or (to be more specific) employment conditions, it might make sense to include a reference level reflecting no change; for example, "No change to current work environment."  That way, item scores that might have a negative effect (relative to "no change") can be identified.  The score for the status quo item could be subtracted from the other items' scores, to anchor the scores around the 0-based reference point.

 


Using Constructed Lists in MaxDiff Experiments

 

Warning: This is an advanced feature that, if not used carefully, can result in errors in data collection and incorrect model results.  Please test your survey thoroughly prior to fielding the study.  After collecting data, analyze the data via Counts to ensure that all items from the parent list have been shown a reasonable number of times to the sample of respondents.  If some items were never shown to any respondents, they will be marked as having been shown zero times (with an asterisk), and estimation using HB will be incorrect unless these items are omitted from the analysis.

 

Lighthouse Studio can build constructed (dynamic) lists based on answers to previous questions or some other rule you define (such as randomization, or always forcing a client's brand onto a list).  You may use the constructed list in later questions or even in MaxDiff exercises.  On the Items tab of the MaxDiff Exercise dialog, you may select a constructed list rather than the typical pre-defined list used in MaxDiff experiments.  This powerful option even allows for the inclusion of respondent-supplied other-specify text as an item in the MaxDiff exercise.

 

The key limitation when using constructed lists is that all respondents' constructed lists must have the same length.  Lighthouse Studio requires you to use SETLISTLENGTH as the last instruction in the constructed list logic used in a MaxDiff list.  The Number of Items (Attributes) specified on the Design tab must match the number of items specified in the SETLISTLENGTH instruction.
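
 

As an illustrative sketch (the question and list names here are hypothetical), constructed list logic for a MaxDiff exercise might first carry forward the items a respondent selected in a screening question, then pad the list from the parent list, and finally fix the length:

 

    AIC(ImportanceCheck)

    ADD(BenefitItems)

    SETLISTLENGTH(30)

 

AIC adds the items chosen in the (hypothetical) prior question ImportanceCheck, ADD appends the remaining members of the parent list BenefitItems (members already on the list are not added twice), and SETLISTLENGTH keeps the first 30 items, matching the Number of Items (Attributes) specified on the Design tab.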

 

Lighthouse Studio uses the design generated from the Design tab, but draws items from the constructed list according to the design, by position on the constructed list.  The values stored for analysis (both for the design shown to respondents and for the answer given) correspond to the original pre-defined list.  For example, if the design calls for the item in position 3, and position 3 of a respondent's constructed list holds item 41 from the pre-defined list, that respondent sees item 41 and the value 41 is stored in the data.

 

Warning: The design report displays frequencies for a design having the same number of items as specified in the SETLISTLENGTH instruction and in a fixed order.  However, each respondent's actual items may vary, and therefore the true frequency of occurrence for each parent item, and co-occurrence of items, may differ significantly from that shown in the design report.

 

Express MaxDiff refers to using constructed list logic to draw a random subset (say, 30 items) from a much larger pre-defined list of items for inclusion in each respondent's MaxDiff exercise.  Across all respondents, all items will have been seen, but each respondent never has to consider more than 30 items.
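
 

For example, assuming a (hypothetical) 120-item parent list named AllItems, Express MaxDiff list logic could be as simple as:

 

    ADD(AllItems)

    RANDOMIZE()

    SETLISTLENGTH(30)

 

This adds all 120 items, randomizes their order, and keeps the first 30, so each respondent receives a random 30-item subset; the Number of Items on the Design tab would then be set to 30.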

 

When estimating item scores, MaxDiff estimates (via HB) a full set of item scores for all items in the pre-defined list, even if each respondent received only a customized subset of the items.  These scores for items not seen take on values near population averages, meaning we assume these items were dropped "at random".  Depending on your reason for using a constructed list that is a subset of the pre-defined list, this may or may not be a desired outcome.

 

The direct anchoring method offers a clever approach for dealing with items "dropped" from a constructed list not at random.  Imagine you asked respondents a preliminary question regarding which of a very large number of items were most preferred or most important.  You then created a constructed list of the 30 most important items to ask in a MaxDiff task.  You could use the direct anchoring approach to inform utility estimation for each respondent regarding the fact that the dropped items were less preferred than the 30 included items.
