Design Tab--Test Design (ACBC)


This functionality lets you generate multiple test (dummy) respondents who answer the ACBC questions randomly.  This is valuable for multiple reasons:

 

1.  The rule of thumb for ACBC (to achieve a high degree of precision at the individual level) is to generate enough near-neighbor product concepts in each respondent's design that each level is seen (in the core set of near-neighbor concepts) at least two times, and preferably three times, per respondent (assuming you are using the recommended approach of BYO + Screeners + Choice Tasks).  The Test Design tool summarizes the number of times each level appears across the product concepts so you can easily check this.  As few as five test respondents will give you the needed information.

 

Any level displayed to a respondent fewer than 2 times is coded in red to warn you that the design is sparse.  Any level displayed exactly 2 times is coded in yellow, as a moderate caution.  If you are interviewing hundreds of respondents and are willing to sacrifice some precision at the individual level in favor of shorter questionnaires, you may decide that fielding a sparse design is perfectly suitable for your situation.

 

2.  Sometimes you'll need to add prohibitions between attributes (so that certain levels of one attribute cannot display with certain levels of a different attribute).  When this happens, you'll experience at least some reduction in the precision of utility estimates.  The Test Design tool reports the standard errors (from aggregate logit) both with and without the inclusion of the prohibition(s), so you can understand the percent change in efficiency due to prohibitions (a small worked example follows this list).  You'll typically need 500 or more test respondents to get a reasonably stable read on design efficiency with and without prohibitions, which can take more than 20 minutes to generate, depending on the speed of your machine.  Please note that, because random responders answer adaptive ACBC questionnaires (where the adaptive designs oversample BYO-selected levels), the estimated standard errors from these robotic respondents are not as stable as in similar tests for non-adaptive CBC designs.

 

Note: prohibiting just one level of one attribute from appearing with one level of a different attribute leads to so little loss in design efficiency that you shouldn't preoccupy yourself with testing how much precision is lost from that one prohibited combination.

 

3.  Sometimes you'll want to test the precision of the utility estimates under different questionnaire settings (especially when changing the number of near-neighbor concepts in a respondent's design).  You can compare the estimated standard errors for different questionnaire settings, holding the number of test respondents constant.  You'll typically need 500 or more test respondents to get an accurate read in this case, which can take around 30 to 45 minutes to generate.
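
 

To make the comparisons in points 2 and 3 concrete, here is a small hypothetical sketch in Python (the standard-error values below are made up; the Test Design report supplies the actual numbers).  Because statistical efficiency is proportional to 1/SE^2, the squared ratio of standard errors approximates the relative efficiency of one design versus another:

    # Hypothetical standard errors for three attribute levels, estimated
    # from aggregate logit on ~500 robotic respondents per design.
    se_with_prohibitions = [0.052, 0.061, 0.058]
    se_without_prohibitions = [0.048, 0.049, 0.055]

    for level, (se_p, se_np) in enumerate(
            zip(se_with_prohibitions, se_without_prohibitions), start=1):
        # Efficiency is proportional to 1/SE^2, so the ratio of squared
        # standard errors expresses one design's efficiency relative
        # to the other's.
        rel_efficiency = (se_np / se_p) ** 2
        print(f"Level {level}: relative efficiency with prohibitions = "
              f"{rel_efficiency:.1%}")

The same arithmetic applies when comparing two questionnaire settings (point 3): hold the number of test respondents constant and compare the resulting standard errors.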

 

Note: ACBC's designer works as long as it can within the available computing resources (1 second or less per respondent) to generate designs with high relative D-efficiency.  Because the optimization is time-boxed rather than run to convergence, repeating a Test Design run with the same questionnaire setup (on the same or a different machine) will often produce somewhat different results.
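
 

The nondeterminism comes from the time budget: the designer improves a starting design until time runs out, so a faster machine may complete more improvement steps.  Below is a minimal sketch of that general pattern (a time-boxed swapping improver in Python); the score function and the list-of-concepts representation are placeholder assumptions, not Sawtooth's actual algorithm:

    import random
    import time

    def improve_design(concepts, score, budget_seconds=1.0):
        # Time-boxed local search: swap one attribute's levels between two
        # concepts; keep the swap only if the design score improves.
        # 'concepts' is a mutable list of lists of level indices.
        deadline = time.monotonic() + budget_seconds
        best = score(concepts)
        n_attributes = len(concepts[0])
        while time.monotonic() < deadline:
            i, j = random.sample(range(len(concepts)), 2)
            a = random.randrange(n_attributes)
            concepts[i][a], concepts[j][a] = concepts[j][a], concepts[i][a]
            candidate = score(concepts)
            if candidate > best:
                best = candidate  # keep the improvement
            else:  # undo the swap
                concepts[i][a], concepts[j][a] = concepts[j][a], concepts[i][a]
        return concepts, best

Because the loop stops at a deadline rather than at convergence, two runs with identical inputs can end at different (but similarly good) designs.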

 

If you use constructed lists, these are ignored for Test Design, and all attributes and levels are included in the ACBC survey.  If you want to test the design with constructed lists implemented, use the Data Generator capability to generate the test records and then analyze the data by other means.

 

 


Test Design Reports

 

The Test Design report includes the following tabs that you may investigate.

 

Level Counts tab  This report summarizes the number of times each level appears across the product concepts (in the core set of near-neighbor concepts).  As few as five test respondents will give you the needed information.  Any level displayed to a respondent fewer than 2 times is coded in red to warn you that the design is sparse; any level displayed exactly 2 times is coded in yellow, as a moderate caution.  If you are interviewing hundreds of respondents and are willing to sacrifice some precision at the individual level in favor of shorter questionnaires, you may decide that fielding a sparse design is perfectly suitable for your situation.
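
 

As a rough illustration of the level-count logic, the sketch below (hypothetical Python; the concept representation is an assumption, and the thresholds mirror the description above rather than Lighthouse Studio's internals) tallies how often each level appears in one respondent's near-neighbor concepts and applies the red/yellow flags:

    from collections import Counter

    # One respondent's near-neighbor concepts: each concept is a tuple of
    # level indices, one per attribute (three attributes here).
    concepts = [
        (1, 2, 1),
        (1, 3, 2),
        (2, 2, 1),
        (1, 2, 3),
        (3, 1, 1),
    ]

    counts = Counter()
    for concept in concepts:
        for attribute, level in enumerate(concept, start=1):
            counts[(attribute, level)] += 1

    # Thresholds from the help text: <2 exposures = red, exactly 2 = yellow.
    for (attribute, level), n in sorted(counts.items()):
        flag = "red" if n < 2 else "yellow" if n == 2 else "ok"
        print(f"Attribute {attribute}, level {level}: {n} exposure(s) [{flag}]")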

 

Note: when Avoid Dominated Concepts is enabled (as it is by default), it can be much more difficult for the algorithm to find designs with the target level balance.  If you are finding it difficult to obtain designs with a reasonable degree of level balance (for non-BYO-selected levels), you might decide to turn off the Avoid Dominated Concepts capability.  There is a tradeoff among level balance, D-efficiency, and avoiding dominated concepts: avoiding dominated concepts usually leads to designs with lower D-efficiency and worse level balance.  You can use the Test Design capability to compare the precision of utility estimates (standard errors) when avoiding dominated concepts vs. not.
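
 

For intuition, one concept "dominates" another when it is at least as attractive on every attribute and strictly better on at least one, given a known preference order over levels.  A hypothetical Python sketch of that pairwise test (the level_values lookup of per-level attractiveness is an assumed input, not something the software exposes):

    def dominates(a, b, level_values):
        # True if concept a weakly beats concept b on every attribute and
        # strictly beats it on at least one, per the supplied level values.
        # a and b are sequences of level indices; level_values[attr][lvl]
        # gives the attractiveness of that level.
        vals_a = [level_values[attr][lvl] for attr, lvl in enumerate(a)]
        vals_b = [level_values[attr][lvl] for attr, lvl in enumerate(b)]
        weakly_better = all(x >= y for x, y in zip(vals_a, vals_b))
        strictly_better = any(x > y for x, y in zip(vals_a, vals_b))
        return weakly_better and strictly_better

A designer that must avoid such pairs has fewer candidate concepts to work with, which is why level balance and D-efficiency can suffer when the option is enabled.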

 

Standard Errors tab  This report shows the standard errors from pooled logit (across all robotic respondents) for each of the attribute levels.  All sections of your ACBC questionnaire, including replacement concepts, are used in this analysis.  If you have specified any prohibitions, standard errors are shown both for your current questionnaire and for a questionnaire without any prohibitions, so you can see how much relative efficiency is lost per attribute level due to the prohibitions you've specified.  Note that it requires about 500 or more "robotic" respondents to obtain decent stability in the standard errors report.  Running 500 respondents might take 20 minutes or more, depending on the speed of your machine.  For even more precision in this report, use 1000+ respondents.
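
 

For readers who want to reproduce this kind of check outside the tool: standard errors from a pooled logit are the square roots of the diagonal of the inverse Hessian of the negative log-likelihood at the estimates.  Below is a minimal binary-logit sketch in Python using NumPy and SciPy (the design matrix and choices are randomly generated stand-ins, and a binary logit is a simplification of the full ACBC likelihood):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))              # stand-in coded design matrix
    y = (rng.random(500) < 0.5).astype(float)  # stand-in 0/1 choices

    def negative_log_likelihood(beta):
        utility = X @ beta
        # Binary-logit log-likelihood per row: y*u - log(1 + exp(u))
        return -np.sum(y * utility - np.logaddexp(0.0, utility))

    result = minimize(negative_log_likelihood, np.zeros(3), method="BFGS")
    # BFGS maintains an approximate inverse Hessian; its diagonal
    # approximates the variances of the estimates.
    standard_errors = np.sqrt(np.diag(result.hess_inv))
    print(standard_errors)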

 

Note that if you are using Summed Price, we assume linear price estimation.

 

Warnings tab  For each robotic respondent and each design-generation "pass" (we typically generate multiple designs for each respondent, then select the best one in terms of D-efficiency), we report whether any issues prevented the process from accomplishing all the specified goals.  For example, the prohibitions you may have specified, or the request to avoid dominated concepts, may make it impossible for the designer to satisfy the goals of level balance and orthogonality.  In such cases, certain goals may need to be relaxed so the software can generate a valid questionnaire for the respondent.  The most common warning is "Design optimize timed out.  Proceeding with current concepts."  That message usually means that, within a "pass," the designer could not investigate all possible swapping and relabeling because it ran out of time (we don't want to make the respondent wait too long to see the next question).  Even when the swapping and relabeling steps are terminated early, the quality of the design should still be very good: breaking out of a "pass" early typically leads to designs that are 98% as good as designs where the iterations were completed, since the first few swaps and relabelings usually lead to the largest gains and later ones typically yield tiny gains.

 

D-Efficiency tab  For each robotic respondent, we compute the D-efficiency of the experimental design as if it were a traditional full-profile card-sort conjoint analyzed via OLS (where respondents saw each product concept just once and provided a rating for each).  This approach to computing D-efficiency is the same as that employed in our CVA software for traditional full-profile ratings-based conjoint.  The report shows the D-efficiency when the BYO tasks are included as additional rows in the design matrix, followed by the D-efficiency when using only the experimental design for the core product concepts (as with traditional full-profile conjoint), not including any replacement cards.  BYO concepts involve one concept per attribute level, coded as an extreme partial-profile concept in which the attribute level in question is coded in the design matrix (and the other attributes are coded as zeros).  An exception is when level prices are associated with an attribute for use within Summed Price, in which case the BYO concepts are coded to reflect the price tradeoffs among that attribute's levels.
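
 

For reference, a common textbook formula for the D-efficiency of an OLS design matrix X with n rows and p columns is det(X'X)^(1/p) / n, which equals 1.0 for an orthogonal, level-balanced design (whether CVA and this report use exactly this scaling is an assumption of the sketch).  The Python snippet below also shows a BYO task appended as an extreme partial-profile row, with only the selected level's column coded and all other attributes zero:

    import numpy as np

    # Stand-in design matrix: rows = concepts, columns = effects-coded levels.
    X = np.array([
        [ 1.0, -1.0,  1.0],
        [-1.0,  1.0,  1.0],
        [ 1.0,  1.0, -1.0],
        [-1.0, -1.0, -1.0],
    ])

    # A BYO concept coded as an extreme partial-profile row: the chosen
    # level's column is coded and all other attributes are zeros.
    byo_row = np.array([[1.0, 0.0, 0.0]])
    X_with_byo = np.vstack([X, byo_row])

    def d_efficiency(design):
        n, p = design.shape
        return np.linalg.det(design.T @ design) ** (1.0 / p) / n

    print(d_efficiency(X))           # 1.0 for this orthogonal design
    print(d_efficiency(X_with_byo))  # lower: the partial-profile row adds
                                     # little information per added row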

 

Note: because of the way ACBC generates "near-neighbor" concepts around each respondent's BYO-specified ideal (oversampling BYO-selected levels), the D-efficiency of ACBC's array of product concepts is lower than for traditional full-profile arrays that are level-balanced and orthogonal.  This is expected, and many research studies comparing ACBC to more traditional D-optimal designs (e.g., non-adaptive CBC) have shown that the ACBC process typically leads to part-worth utilities with greater precision than the D-optimal approach.  The benefits of the adaptive ACBC process seem to outweigh the losses from using less statistically efficient designs.

 

Individual Data tab  Each robotic respondent's answers to the questionnaire are reported here.

 
