Questions about test design and CBC/HB

Hello,

I have created the following CBC with an alternative-specific design:

One primary attribute with two levels
One common attribute with five levels (different prices)

Each level of the primary attribute has three conditional attributes:
one conditional attribute with five levels
one conditional attribute with three levels
one conditional attribute with two levels

All in all, my design summary is:
Task generation method is 'Balanced Overlap' using a seed of 1.
Based on 300 version(s).
Includes 3000 total choice tasks (10 per version).
Each choice task includes 3 concepts and 8 attributes.
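As a quick cross-check, the task count and the number of main-effect parameters follow directly from the summary above. This is a minimal sketch, assuming standard effects coding (each attribute contributes levels − 1 parameters) and one extra parameter for the None option; the per-attribute level counts are taken from the attribute description above:

```python
# Attribute level counts: primary (2), common price (5), and two
# alternative-specific sets of conditional attributes (5, 3, 2 each)
levels = [2, 5, 5, 3, 2, 5, 3, 2]

versions, tasks_per_version = 300, 10
total_tasks = versions * tasks_per_version      # 300 x 10 = 3000 choice tasks

# Effects coding: each attribute uses (levels - 1) independent parameters;
# the None alternative adds one more
main_effect_params = sum(L - 1 for L in levels) + 1

print(total_tasks)          # 3000
print(main_effect_params)   # 20
```

The 20 parameters correspond to the 28 rows of the zero-centered standard-error table in the test output (27 levels plus NONE), since one level per attribute is linearly dependent under effects coding.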

Question 1: Are there any significant errors/risks up to this point?

The test report shows nearly equal frequencies within each attribute, with values close to or well above 1000. However, the following warning is displayed:

**Warning: You have specified prohibitions/alternative-specific rules between two or more attributes.  You cannot automatically estimate an interaction effect between two attributes when a prohibition or alternative-specific rule is in place.

Question 2: Is this warning common for alternative-specific designs? Or did I make a mistake?

The Advanced Test Design looks like:

Logit Efficiency Test Using Simulated Data
-------------------------------------------------------------
Main Effects: 1 2 3 4 5 6 7 8
Build includes 300 respondents.

Total number of choices in each response category:
Category   Number  Percent
-----------------------------------------------------
       1     847   28.23%
       2     868   28.93%
       3     865   28.83%
       4     420   14.00%

There are 3000 expanded tasks in total, or an average of 10.0 tasks per respondent.


Iter    1  Log-likelihood = -4054.29664  Chi Sq = 209.17289  RLH = 0.25887
Iter    2  Log-likelihood = -4043.37506  Chi Sq = 231.01605  RLH = 0.25981
Iter    3  Log-likelihood = -4042.63996  Chi Sq = 232.48624  RLH = 0.25988
Iter    4  Log-likelihood = -4042.60526  Chi Sq = 232.55565  RLH = 0.25988
Iter    5  Log-likelihood = -4042.60382  Chi Sq = 232.55853  RLH = 0.25988
Iter    6  Log-likelihood = -4042.60376  Chi Sq = 232.55865  RLH = 0.25988
Iter    7  Log-likelihood = -4042.60376  Chi Sq = 232.55866  RLH = 0.25988
*Converged
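For reference, the Chi Sq and RLH figures in the run above can be reproduced from the final log-likelihood using the standard MNL fit statistics: RLH = exp(LL/N) and Chi Sq = 2·(LL − LL₀), where LL₀ is the null log-likelihood. This is a minimal sketch, assuming the null model assigns equal probability to all four response categories (three concepts plus None):

```python
import math

# Values from the converged logit run above
ll_final = -4042.60376   # final log-likelihood
n_tasks = 3000           # expanded tasks
n_alts = 4               # 3 concepts + None

# Null model: every alternative equally likely in every task
ll_null = n_tasks * math.log(1 / n_alts)

rlh = math.exp(ll_final / n_tasks)    # root likelihood (geometric mean probability)
chi_sq = 2 * (ll_final - ll_null)     # likelihood-ratio chi-square vs. null

print(f"RLH    = {rlh:.5f}")    # ~0.25988
print(f"Chi Sq = {chi_sq:.3f}") # ~232.559
```

Both values match the last iteration of the log, which confirms the test used four response categories.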


          Std Err    Attribute
  1       0.02083    1 1
  2       0.02083    1 2

  3       0.04400    2 1
  4       0.04497    2 2
  5       0.04414    2 3
  6       0.04412    2 4
  7       0.04449    2 5

  8       0.06481    3 1
  9       0.06410    3 2
 10       0.06437    3 3
 11       0.06445    3 4
 12       0.06585    3 5

 13       0.04514    4 1
 14       0.04509    4 2
 15       0.04503    4 3

 16       0.06624    5 1
 17       0.06423    5 2
 18       0.06409    5 3
 19       0.06475    5 4
 20       0.06376    5 5

 21       0.04526    6 1
 22       0.04464    6 2
 23       0.04447    6 3

 24       0.03106    7 1
 25       0.03106    7 2

 26       0.03117    8 1
 27       0.03117    8 2

 28       0.05263    NONE

The CBC/HB estimation with dummy data (300 respondents; 20,000 total iterations) looks very strange. The beta plots all hover around a mean of 0, Pct. Cert. is 0.214, and RLH is 0.336.

Question 3: Is this due to the uniformly distributed dummy data, or is there an error here that could also occur with real data?
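Incidentally, the reported Pct. Cert. and RLH are internally consistent: Pct. Cert. = 1 − LL/LL₀ and RLH = exp(LL/N), so the implied Pct. Cert. can be recovered from RLH alone. A minimal sketch, again assuming four equally likely response categories under the null:

```python
import math

n_tasks = 3000
n_alts = 4
rlh = 0.336  # RLH reported by CBC/HB

ll = n_tasks * math.log(rlh)              # implied log-likelihood
ll_null = n_tasks * math.log(1 / n_alts)  # chance-level log-likelihood

pct_cert = 1 - ll / ll_null
print(f"Pct. Cert. = {pct_cert:.3f}")  # ~0.213, close to the reported 0.214
```

The small gap (0.213 vs. 0.214) is just rounding in the reported RLH, so the two statistics are telling the same story: fit barely above chance, which is what random dummy data should produce.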

Thank you in advance
asked Oct 10, 2019 by Malte

1 Answer

+1 vote
Malte,

That's a common warning for alternative-specific designs, since some interactions (e.g., between conditional attributes) may not be estimable. Your design test looks good, and I see no red flags in it to suggest any problems.

HB estimation with dummy data usually looks strange, especially with random or uniform data, so don't worry about that.
answered Oct 10, 2019 by Keith Chrzan Platinum Sawtooth Software, Inc. (95,775 points)