Checks for Manual CBC design and response bias by 'Generate Data'

Hi team, hope you are well and safe. I need some guidance/clarification; please see below.

I am running a CBC with an alternative-specific design. I have a two-level attribute (Att1) whose levels are used to prohibit certain levels of another, four-level attribute (Att2). Att1 is also used to build the alternative-specific design with the price attributes (six price attributes, due to the number of products involved).

1. Because of the prohibitions, level 3 of Att2 appears about 1.5 times as often as the other levels of Att2 (this is a client requirement). Will this introduce a lot of bias into the design?

2. Is there a way to keep one level of an attribute (Att1 level 1) constant across some screens and the other level (Att1 level 2) constant across the remaining screens, or does this have to be done manually? If the frequencies of these levels end up similar, can this bias the design or the results?

3. I had to create a manual design because of points 1 and 2 above:

- We show the levels of Att1 an equal number of times across versions, tasks, and concepts
- We have also ensured that the levels of the other attributes appear with similar frequencies in the design

The manual design produced standard errors at or just under the 0.1 threshold. We used the 'Generate Data' option to create some dummy data before launch. I generated data for the same design multiple times, and the importances for Att1 were skewed toward one level in every run (Att1: Level 1 35%, Level 2 65%). Is this just a random artifact of the algorithm the option uses, or does it point to bias in the design?

4. Are there any other checks I can run to make sure the manual design is efficient?
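
(For reference, one balance check I can run on the exported design file looks roughly like the sketch below; the file name and column names are placeholders for my actual export.)

```python
# Rough balance check on an exported design file.
# "manual_design.csv", "Version", "Task", "Concept", "Att1", "Att2" are
# placeholder names; adjust to match the actual export layout.
import pandas as pd

design = pd.read_csv("manual_design.csv")

# One-way level counts: are frequencies as intended
# (e.g., Att2 level 3 appearing ~1.5x the other levels)?
for att in ["Att1", "Att2"]:
    print(att)
    print(design[att].value_counts().sort_index(), "\n")

# Two-way counts: does each Att1 level pair with each Att2 level
# roughly as often as the prohibitions allow?
print(pd.crosstab(design["Att1"], design["Att2"]))

# Per-version counts of Att1, to confirm the constant-level-per-screen
# pattern is mirrored evenly across versions.
print(design.groupby("Version")["Att1"].value_counts().unstack(fill_value=0))
```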
asked May 18 by Prakhar

1 Answer

Indeed, to accomplish some of the specific frequency requirements you're looking for, it's going to take some manual massaging of the design.  

Bias is an interesting question and involves mostly psychological processing issues for humans interacting with the design.  Modest prohibitions and modest adjustments to frequencies of level occurrences shouldn't cause much bias; but it depends on how humans react.

Regarding importances of attributes when using randomly generated data, attributes with more levels will show higher "importance"...because "importance" is estimated by looking at the range of utilities for that attribute.  Because there is less precision on each attribute level when there are more levels within an attribute, randomly-generated data will cause the importance score to inflate somewhat for such attributes.  This is another reason that traditional "importance" scores are a bit weird.  I like measuring the impact of attributes in terms of sensitivity simulations in market simulators where such strange anomalies are lessened.
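
To see the effect, you can simulate purely random part-worths and compute importances from the utility ranges. This is just an illustrative simulation, not what 'Generate Data' actually does:

```python
# Minimal illustration of why random part-worths inflate "importance"
# for attributes with more levels.
import numpy as np

rng = np.random.default_rng(0)
n_resp = 1000
levels = {"Att1 (2 levels)": 2, "Att2 (4 levels)": 4}

# Draw zero-centered random part-worths per respondent, same noise per level,
# and take the utility range (max minus min) within each attribute.
ranges = {name: np.ptp(rng.normal(size=(n_resp, k)), axis=1)
          for name, k in levels.items()}

total = sum(ranges.values())
for name, r in ranges.items():
    print(f"{name}: mean importance ~ {np.mean(r / total) * 100:.1f}%")

# Even with purely random utilities, the 4-level attribute has a larger
# expected range, so its importance share is systematically higher than
# the 2-level attribute's (roughly 65% vs 35% in this setup, not 50/50).
```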
answered May 18 by Bryan Orme Platinum Sawtooth Software, Inc. (189,140 points)