Designing the Study


The Design Tab presents a number of options to control your study design:

 

Number of Items (Attributes)

 

This field displays the number of items (i.e., levels or features) to include in the exercise.  The field is not editable; it is computed from the current list you have specified on the Items tab.  Depending on your license, you can specify up to 2,000 items on the Items tab.

 

Number of Items per Set (Question)

 

Generally, we recommend displaying either four or five items at a time (per set or question) in MaxDiff questionnaires.  However, we do not recommend displaying more items per set than half the total number of items in your study.  Therefore, if your study has just eight total items, we would not recommend displaying more than four items per set.

 

If you would like to use the Method of Paired Comparisons approach, then specify just two items per set.

 

Research using synthetic data suggests that asking respondents to evaluate more than about five items at a time within each set may not be very useful in MaxDiff studies.  The gains in precision of the estimates are minimal when using more than five items at a time per set.  The small statistical gains from showing even more items may be offset by respondent fatigue or confusion.

 

Number of Sets (Questions) per Respondent

 

We generally recommend asking enough sets (questions) per respondent that each item has the opportunity to appear three to five times per respondent.  For example, consider a study with 20 items where we display four items per set.  With 20 total sets, each item will be displayed 4 times (assuming the design is perfectly balanced).  This leads to the following decision rule and formula:

 

For best results (under the default HB estimation used in MaxDiff), the suggested number of sets is at least:

 

 3K/k

 

 where K is the total number of items in the study, and k is the number of items displayed per set.
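
To make the guideline concrete, the following short Python sketch (illustrative only; it is not part of Lighthouse Studio) applies the 3K/k rule:

import math

def suggested_min_sets(total_items, items_per_set):
    # Smallest whole number of sets so each item can appear about 3 times
    # per respondent (the 3K/k guideline).
    return math.ceil(3 * total_items / items_per_set)

# Example: 20 items shown 4 at a time -> 15 sets,
# giving each item roughly 15 * 4 / 20 = 3 exposures per respondent.
print(suggested_min_sets(20, 4))   # prints 15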

 

The software will warn you if the number of sets you have requested leads to each item being displayed fewer than 2 times per respondent.  (Note: there are instances where researchers purposefully field sparse MaxDiff designs because they don't need high-resolution individual-level scores for the many items in their study but can achieve their goals at the segment or population level via logit or latent class estimation.)

 

The recommendation to show each item at least 3x per respondent assumes HB estimation (the default method in MaxDiff).  Advanced researchers may decide to use logit or latent class estimation (available within the Analysis Manager area) instead.

 

When pooling data across respondents using Latent Class or aggregate logit, with large enough sample sizes you may be able to achieve quite reasonable population-level scores even though each respondent has seen each item just once, or even less often, across the MaxDiff sets.

 

A rule of thumb for pooled analysis of MaxDiff data is that each item should be seen by the segment or population of interest at least 500 times and preferably 1000 times.
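
As a rough illustration of this rule of thumb, the Python sketch below (the function respondents_needed is hypothetical, not a Lighthouse Studio feature) estimates how many respondents are needed to reach a target number of exposures per item when pooling across the sample:

import math

def respondents_needed(total_items, items_per_set, sets_per_respondent,
                       target_exposures_per_item=500):
    # On average, each respondent exposes each item about
    # (sets_per_respondent * items_per_set / total_items) times,
    # so divide the per-item target by that rate.
    exposures_per_respondent = sets_per_respondent * items_per_set / total_items
    return math.ceil(target_exposures_per_item / exposures_per_respondent)

# Example: 20 items, 4 per set, 5 sets per respondent (a sparse design).
# Each item is seen about once per respondent, so roughly 500 respondents
# are needed to reach 500 exposures per item.
print(respondents_needed(20, 4, 5))   # prints 500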

 


Generate Design

 

When you are pleased with your settings, click Generate Design to produce a design (which is stored internally).  (You can export the design to a .csv file if you wish using the Export Design... button described at the end of this section.)

 

The design algorithm follows these guidelines:

 

1. Create a design that features item frequency balance. A balanced design is one in which the one-way frequencies are nearly equivalent (how many times each level appears across the entire design) and two-way frequencies are also nearly equivalent (how many times each pair of items appears within the same set across the entire design).  When one-way and two-way frequencies are balanced, this means the design is both level balanced and orthogonal.

 

2. Ensure that each version of the design (respondent questionnaire) features connectivity (meaning that all items are linked either directly or indirectly), unless the Allow Individual Designs Lacking Connectivity option is checked. Without connectivity it becomes very difficult for the default estimation procedure in MaxDiff (HB) to scale the items properly relative to one another for each respondent.
3. After the designs have been generated based on the guidelines in 1 and 2, swap the order of the items within each set so that each item appears approximately an equal number of times in each position. Positional balance reduces or virtually eliminates order bias. Finally, randomize the order of the tasks within each version.

 

The design process is repeated 1000 separate times and the replication that demonstrates the best one-way balance is selected. If multiple designs have the same degree of one-way balance, then we select among those designs based on the best two-way balance. If multiple designs have the same degree of one-way and two-way balance, then we select among those designs based on the best positional balance. With small to moderate sized designs and no prohibitions, this usually happens within a few seconds.
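
The sketch below is not the Lighthouse Studio algorithm, but it shows one simple way (in Python) to tabulate the one-way and two-way frequencies that these selection criteria refer to, which can be handy when inspecting a design by hand:

from collections import Counter
from itertools import combinations

def balance_report(design):
    # `design` is a list of sets (questions), each a list of item numbers,
    # pooled across all versions.  A well-balanced design has nearly equal
    # counts within each Counter.
    one_way = Counter()
    two_way = Counter()
    for task in design:
        one_way.update(task)
        two_way.update(frozenset(pair) for pair in combinations(task, 2))
    return one_way, two_way

# Example with two toy sets of four items each:
ones, twos = balance_report([[1, 2, 3, 4], [3, 4, 5, 6]])
print(ones)                        # items 3 and 4 appear twice, the others once
print(twos[frozenset((3, 4))])     # the pair (3, 4) appears in both sets: 2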

 


Test Design

 

If you have already generated a design, you may at a later point want to review the properties of the design in terms of:

 

Number of versions used

Design seed used

One-way and two-way frequencies

Positional balance

 

Click Test Design to produce a report showing the characteristics of the design currently being used for your MaxDiff exercise.

 


Advanced Settings

 

Note: it is not necessary to modify the default settings in this section to achieve good results.  The default recommendations and guidelines for MaxDiff should work well for most situations.  However, researchers wanting to fine-tune their models or exert greater control may find the following advanced settings useful.

 

Number of Versions

 

Though it is possible to estimate scores relatively efficiently using just a single questionnaire version for all respondents, there is practical benefit to using multiple versions (sometimes called "blocks") of the questionnaire.  With multiple versions of the questionnaire, different respondents see different series of questions.  Across respondents, this dramatically increases the variation in the way items are combined within sets, which can reduce potential context biases (which are usually modest).

 

The default is 300 questionnaire versions, which ensures a very large overall design and a great deal of combinatorial variation in the sets.  Using 300 versions is more than adequate, since beyond a handful of questionnaire versions the further practical benefits of increased variation are barely detectable.  Since Lighthouse Studio makes it nearly automatic to produce and use multiple versions of the questionnaire, it makes sense to use a large number.  The software allows you to request up to 999 questionnaire versions.  The two main take-aways are: 1) it makes sense to use many questionnaire versions, and 2) it is not necessary for each respondent to receive a unique questionnaire version.

 

If using the separate MaxDiff designer for conducting paper-and-pencil studies, we recommend using at least four different versions of the questionnaire.  You need to find a good balance between difficulty of study administration and design quality.  Another key point to remember is that it isn't necessary for each version of the questionnaire to be completed by an equal number of respondents; each version alone typically has excellent level balance and orthogonality.

 

Number of Iterations

 

By default, the MaxDiff designer repeats its algorithm 1000 times and returns the best design found across those passes. The quality of the design is based on the one-way, two-way, and positional balance (in that order of precedence). For many designs, 1000 iterations can be done within a few seconds and is usually perfectly adequate to achieve an excellent design.  Up to 999,999 iterations may be specified.

 

Design Seed

 

Specify a value from 1 to 999,999,999. This integer (seed) is used for determining the starting point for the design algorithm. Different seeds will yield different designs, all having approximately the same overall design efficiency.  You can try different starting seeds to see if you obtain slightly better designs, but this typically isn't necessary to achieve excellent results.

 

Allow Individual Designs Lacking Connectivity

 

Connectivity is the property that the items cannot be divided into two groups such that no item in one group is ever compared with an item in the other. Even when choice sets are created randomly, if respondents receive enough sets it would be difficult for the items to lack connectivity for any given respondent. However, if you select a design with relatively many items and relatively few sets per respondent, many individual versions of the questionnaire could lack connectivity. The default for the software is to reject any version (questionnaire for one respondent) that lacks connectivity. But in the case of many items and few sets per respondent, it may become impossible to satisfy connectivity. If you would like to permit versions lacking connectivity (which may be appropriate if you ask few questions of any one respondent, probably with a focus on aggregate analysis), then you can check this box.  Make sure to specify many questionnaire versions so that connectivity is established across the versions.  To ensure stable estimation, advanced users may wish to generate dummy (random) response data (Test | Generate Data), analyze the test data using logit, and examine the size of the standard errors to ensure reasonable precision.
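
For readers who want to see what a connectivity check involves, here is a simple Python sketch (illustrative only, not the software's implementation) that treats items appearing in the same set as directly linked and tests whether every item can be reached from every other item through those links:

def is_connected(version_sets, total_items):
    # Build an adjacency map from co-appearance within sets, then walk the
    # graph from item 1 and check that every item is reachable.
    adjacency = {item: set() for item in range(1, total_items + 1)}
    for task in version_sets:
        for a in task:
            adjacency[a].update(b for b in task if b != a)

    seen, frontier = {1}, [1]
    while frontier:
        current = frontier.pop()
        for neighbor in adjacency[current]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return len(seen) == total_items

# Two sets that never share an item lack connectivity:
print(is_connected([[1, 2], [3, 4]], 4))          # False
print(is_connected([[1, 2], [2, 3], [3, 4]], 4))  # True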

 

Prohibitions

 

Occasionally, your study might include multi-level items characterizing degrees or amounts of some feature.  For example, in studying features of fast food restaurants, you may wish to include the following three items in your study list:

 

Order and receive food within 3 minutes

Order and receive food within 6 minutes

Order and receive food within 10 minutes

 

When considering a fast-food restaurant, it would seem rational that all respondents would prefer faster service to slower service.  Therefore, it makes less sense to include more than one of these levels within the same set.  You could specify prohibitions between these levels by clicking the Prohibitions... button so that they never appear compared with one another.

 

For those familiar with conjoint analysis, it will be heartening to learn that prohibitions are not as damaging in MaxDiff experiments as they are in conjoint analysis.  For example, in a conjoint study where 4 prohibitions are specified between two 4-level attributes, 4 of the possible 16 combinations (or 25%) are prohibited.  Depending on the pattern of prohibited combinations, this could be very detrimental.  However, in a typical 20-item study, there are 1/2(20*19) = 190 possible combinations of items (taken two at a time).  Prohibiting 4 of these represents only about 2% of the possible combinations, which will have very little impact on the overall results.  Please see Robustness of MaxDiff Designs to Prohibitions for more details.
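
The arithmetic behind this comparison is easy to reproduce.  The short Python sketch below (illustrative only) computes the share of all possible item pairs removed by a given number of prohibitions:

def prohibited_share(total_items, prohibited_pairs):
    # Fraction of all possible item pairs removed by the prohibitions.
    total_pairs = total_items * (total_items - 1) // 2
    return prohibited_pairs / total_pairs

# The 20-item example from the text: 4 prohibitions out of 190 pairs.
print(round(prohibited_share(20, 4) * 100, 1))   # prints 2.1 (percent)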

 

When you click the Generate Design button, if the software warns that the questionnaire versions lack connectivity, this could be a sign that your prohibitions are excessive (after ruling out the possibility that you are displaying too few sets per respondent).

 


Importing a Design from an External File

 

Some advanced researchers may wish to import their own design(s) from an external text file using the Import Design... button.  To demonstrate the required file format, consider a MaxDiff exercise with the following specifications:

 

20 items in the exercise

4 items per set

20 sets per respondent

2 versions of the questionnaire

 

The design must have the following comma-delimited format (the header row with labels is optional):  

 

Version,Set,Item1,Item2,Item3,Item4

1,1,10,1,19,7

1,2,19,17,15,3

1,3,17,10,16,6

1,4,6,20,12,13

1,5,13,2,9,5

1,6,2,9,13,11

1,7,15,18,1,14

1,8,7,3,20,8

1,9,5,6,18,9

1,10,3,12,2,16

1,11,8,3,10,4

1,12,15,7,5,2

1,13,11,16,14,19

1,14,4,19,11,12

1,15,14,11,17,8

1,16,20,15,13,4

1,17,9,1,17,18

1,18,18,8,16,6

1,19,1,5,12,20

1,20,14,4,7,10

2,1,4,11,6,15

2,2,10,13,14,18

2,3,20,10,9,1

2,4,11,20,8,5

2,5,16,13,5,7

2,6,12,19,7,17

2,7,12,14,8,17

2,8,13,18,3,2

2,9,18,7,4,11

2,10,17,4,15,16

2,11,5,14,10,15

2,12,19,2,3,8

2,13,2,15,17,20

2,14,16,12,5,9

2,15,4,2,19,10

2,16,6,11,14,3

2,17,8,7,4,6

2,18,3,1,11,19

2,19,1,9,16,20

2,20,12,9,2,14
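
Before importing a file like the one above, it can be useful to run a quick sanity check on the layout.  The following Python sketch (read_design is a hypothetical helper, not part of Lighthouse Studio) verifies that each row has the expected number of items and that no item repeats within a set:

import csv

def read_design(path, items_per_set):
    # Read a comma-delimited design laid out as shown above:
    # Version, Set, Item1..ItemN, with an optional header row.
    rows = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row or not row[0].strip().isdigit():
                continue   # skip blank lines and the optional header row
            values = [int(v) for v in row]
            version, set_number, items = values[0], values[1], values[2:]
            if len(items) != items_per_set:
                raise ValueError("Version %d, set %d has %d items; expected %d"
                                 % (version, set_number, len(items), items_per_set))
            if len(set(items)) != len(items):
                raise ValueError("Version %d, set %d repeats an item"
                                 % (version, set_number))
            rows.append((version, set_number, items))
    return rows

# Example (hypothetical file name):
# design = read_design("maxdiff_design.csv", items_per_set=4)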

 

When you import a design from an external file, the designs are stored within the system as if the MaxDiff designer had generated them.  You can click Test Design from the Design tab to compute one-way, two-way, and positional frequencies for the design you supplied through the import process.  

 

During data collection, each new respondent entering the survey is assigned the next questionnaire version in the design.  Once all questionnaire versions have been assigned, the next respondent receives the first version again, and so on.
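
The assignment behavior described above amounts to a simple cyclic (round-robin) rotation, which the illustrative Python snippet below mimics:

def assigned_version(respondent_index, number_of_versions):
    # Respondent 0 gets version 1, respondent 1 gets version 2, and so on,
    # wrapping back to version 1 after the last version has been used.
    return (respondent_index % number_of_versions) + 1

# With 2 versions, the first five respondents receive versions 1, 2, 1, 2, 1.
print([assigned_version(i, 2) for i in range(5)])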

 


Exporting the Current Design

 

Some users may wish to examine their designs or use the design details to create paper-and-pencil questionnaires.  You can export the current design to a comma-delimited format with the layout as described in the previous section by clicking Export Design....  You could also modify that exported file for your purposes and re-import the design into your MaxDiff project (make sure to test the design thoroughly to ensure that your customized design leads to high precision estimates of item scores).

Page link: http://www.sawtoothsoftware.com/help/lighthouse-studio/manual/index.html?hid_web_maxdiff_numberofsets.html