Constant Sum Chip Allocation - How to choose the best design

Hi All,

1. What are the guidelines for preliminarily testing the design of a constant sum chip allocation conjoint? What are the norms? Any papers on this?
Can we use the "CBC advanced design test" button in the software for this? Should we still target the same 0.025 or <0.05 standard error range in the test results, or should other norms or a different strategy apply?

Should the incidence of "None of these" be assumed to be 0 for testing purposes, even if we are going to capture and model it in some way? If not, what should it be?

I am not sure the same testing procedure applied within Lighthouse Studio is equally accurate for both CBC and chip allocation.

I do understand that chip allocation uses the same framework as CBC, but I also see that you get more learning from each screen in chip allocation than from regular choice-based conjoint.

Thanks for any thoughts or references or direction!
asked Dec 18, 2017 by furoley Bronze (885 points)
retagged Dec 19, 2017 by Walter Williams

1 Answer

0 votes
I am not sure that the design test uses allocation data to test your design.  The design test is a strong test for identification ("will I get a model with utilities for all levels of all attributes or will it crash?") but the actual standard errors depend on the actual utilities (the test assumes all utilities are zero) so that test is in this way an approximation from the start.  In addition the design test assumes an aggregate logit model when we're usually running HB or latent class MNL instead, so again the test is an approximation to help us make decisions about our design.  

If you really want to test your design to your situation you'll want to generate some realistic data for your design (and for each respondent if you want to run an HB model) and then analyze your generated data set on your design.  Of course this is a lot of work and folks mostly don't want to do it.  
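To make the "generate realistic data, then analyze it on your design" suggestion concrete, here is a minimal sketch in Python. Everything in it is a hypothetical stand-in, not Sawtooth's implementation: the design is a random dummy-coded layout, the "realistic" utilities are assumed values, and an aggregate logit is fit with `scipy.optimize.minimize`, with approximate standard errors read off the inverse Hessian.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Hypothetical setup: 300 respondents x 10 tasks, 3 concepts per task,
# 2 attributes with 3 levels each, dummy-coded (first level = reference).
n_resp, n_tasks, n_alts = 300, 10, 3
n_params = 4  # (3 - 1) + (3 - 1) dummy codes
true_beta = np.array([0.5, 1.0, -0.3, 0.4])  # assumed "realistic" utilities

# Random design: an attribute level for each concept, then dummy codes.
levels = rng.integers(0, 3, size=(n_resp * n_tasks, n_alts, 2))
X = np.zeros((n_resp * n_tasks, n_alts, n_params))
for a in range(2):
    for l in (1, 2):
        X[:, :, a * 2 + (l - 1)] = (levels[:, :, a] == l)

# Simulate single choices from a logit model under the assumed utilities
# (the built-in design test instead assumes all utilities are zero).
util = X @ true_beta
p = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
choices = np.array([rng.choice(n_alts, p=row) for row in p])

def neg_ll(beta):
    v = X @ beta
    return -(v[np.arange(len(choices)), choices]
             - np.log(np.exp(v).sum(axis=1))).sum()

fit = minimize(neg_ll, np.zeros(n_params), method="BFGS")
se = np.sqrt(np.diag(fit.hess_inv))  # approximate standard errors
print("estimates:", np.round(fit.x, 2))
print("std errors:", np.round(se, 3))
```

Swapping the zero-utility assumption for plausible non-zero utilities (and, for allocations, a response model with more noise per respondent) is exactly where the extra work - and the extra realism - comes in.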

Also, I'm not sure I'd say we get more learning from each screen with an allocation.  While that MAY happen, what also happens is that the model with allocations has more response inconsistency, even within respondent.  That means the utilities tend to be smaller with allocations than with single choices (see the paper by Chrzan and Alberg in the 2001 Sawtooth Software Conference Proceedings, particularly the small table on page 180).
answered Dec 19, 2017 by Keith Chrzan Platinum Sawtooth Software, Inc. (114,400 points)
Thank you, Keith. I can only submit random data, not "realistic data". But what are the acceptable thresholds for the standard errors coming from this? 0.05, as for traditional CBC, or 0.10, as for MBC?
0.05 like traditional CBC.
Thank you again. I know chip allocation data is harder for respondents to provide, so the quality of responses would be lower and fatigue sets in earlier.
With CBC, we always try to stay under 17 choice tasks, with rare exceptions up to 22 when the CBC is really easy and enjoyable.
But what are the recommendations for chip allocation? Half of what one would consider good for a CBC of similar structure?
It depends so much on details of the design, the respondent task and the respondents themselves that I don't have a blanket rule.  I usually do ask fewer allocation questions than I do single choice CBC questions, though.
Thank you for your help. Really appreciated