The right ACBC settings for Screener and Choice Tournament

Dear community,

In my ACBC study with 7 attributes, I have applied the following settings as per the documentation (a rough sketch of the tournament arithmetic follows the list):

# screening tasks: 7
# concepts per task: 4
# unacceptables: 4
# must-haves: 3
Max. # of concepts in Choice Tournament: 16
# choice task concepts: 3
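
To gauge interview length, here is the arithmetic I'm assuming for the tournament. This is my own sketch, not an official Sawtooth formula: each task with k concepts keeps one winner and eliminates k - 1, so whittling m concepts down to a single winner takes roughly ceil((m - 1) / (k - 1)) tasks.

```python
import math

# Rough sketch of the tournament length (my own assumption, not an
# official Sawtooth formula): each task with k concepts keeps one winner
# and eliminates k - 1, so reducing m concepts to a single winner takes
# about ceil((m - 1) / (k - 1)) tasks.
def approx_tournament_tasks(max_concepts: int, concepts_per_task: int) -> int:
    return math.ceil((max_concepts - 1) / (concepts_per_task - 1))

print(approx_tournament_tasks(16, 3))  # -> 8 tasks with the settings above
print(approx_tournament_tasks(12, 3))  # -> 6 tasks with a 12-concept cap
```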

So far, the feedback from respondents is that the cognitive burden becomes quite high after a couple of Screener tasks.
Would it be advisable to, e.g., decrease the # of screening tasks? I'm worried that respondents will give up and start selecting options at random because the tasks get too complex with 7 attributes.
I also noticed that so far nobody has made it to must-have questions 2 and 3 - and even must-have question 1 appeared only rarely in the beta sample.
asked Feb 15 by danny Bronze (1,260 points)

1 Answer

Best answer
These are excellent questions, because so much depends on the kind of interest/engagement your respondents have in the choice task.  If you're struggling to get engagement, then perhaps you need to give up some precision at the individual level (via a shorter ACBC survey) in favor of not inviting respondents to answer sloppily just to finish the survey.

A lot depends on whether you feel your sample size is large enough to field a sparser/quicker ACBC survey.  For example, with 800 respondents you could feel much better about reducing the information content and the statistical precision at the individual level than with only 100 respondents.

Regarding the final section, the "purchase likelihood" (calibration) questions: you only need to ask those if you need to recalibrate the "None" utility to be tied to some threshold point on the 5-point purchase intent scale.  Many (most?) people don't bother with that final section.
answered Feb 15 by Bryan Orme Platinum Sawtooth Software, Inc. (184,140 points)
selected Feb 24 by danny
Thanks a lot for your comment Bryan!

I am indeed not using the calibration section. But even then, the survey seems a bit tiring for respondents. In my case, 200-300 respondents will be the target, so I am not quite sure how much precision I can afford to give up.

I just ran 2 Test Designs with the following settings and outcomes (see the exposure-check sketch after the results):

1) n=250 with 7 screener tasks, 4 unacceptables and 3 must-haves [Original settings as per documentation]
--> min. times level was shown: 6
--> Standard errors of all part-worth utilities <0.04

2) n=250 with 6 screener tasks, 3 unacceptables and 2 must-haves
--> min. times level was shown: 4
--> Standard errors of all part-worth utilities <0.04 (same)
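
In case it clarifies what I'm checking: here is a minimal sketch of how I understand the "min. times level was shown" diagnostic, assuming a long-format design with one row per shown concept. The column names are my own illustration, not the Sawtooth export format.

```python
import pandas as pd

# Hypothetical exposure check, assuming a long-format design: one row per
# concept shown, a "respondent" id column, and one column per attribute
# holding the shown level. Names are illustrative, not Sawtooth's export.
def min_level_exposures(design: pd.DataFrame, attr_levels: dict) -> int:
    """Smallest number of times any level is shown to any one respondent."""
    worst = float("inf")
    for _, rows in design.groupby("respondent"):
        for attr, levels in attr_levels.items():
            counts = rows[attr].value_counts()
            for level in levels:  # include levels shown zero times
                worst = min(worst, int(counts.get(level, 0)))
    return int(worst)

# e.g. min_level_exposures(design, {"brand": [1, 2, 3], "price": [1, 2, 3, 4]})
```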

So going with the standard settings (7 screener tasks) seems to be over-engineering? The levels were all shown at least 6 times, while at least 3 is recommended. Reducing the # of screening tasks to 6 still produces great results (minimum times a level was shown is 4). I will also experiment with reducing the # of choice tasks by 1 to see whether the rule of thumb of each level being shown at least 3 times still holds.

Does that make sense? As long as the std. errors are fine and each level is shown at least 3 times, even with n=250, making those changes to the # of screener and tournament tasks should be fine? (A tiny sketch of the check I mean is below.)
(Note: I will analyze the real data with HB estimation, as recommended.)
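
To make the go/no-go criterion explicit, this is the check I'm applying, with thresholds as I understand them from Sawtooth's guidance (my paraphrase, not gospel):

```python
# Tiny go/no-go check combining the two rules of thumb discussed here
# (thresholds as I understand Sawtooth's guidance, not an official test).
def design_passes(min_exposures: int, max_std_error: float) -> bool:
    return min_exposures >= 3 and max_std_error <= 0.05

print(design_passes(4, 0.04))  # -> True for the reduced design above
```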
I then reduced the settings even further:

n=250 with 5 screener tasks, 2 unacceptables and 1 must-have, max. # of concepts in the choice tournament: 12, 3 concepts per task

--> min. times level was shown: 4
--> Standard errors of all part-worth utilities <0.04 (same)

So even with these settings, the std. errors are fine and the min. times a level was shown is 4. Can I go ahead with them?

I'm just a bit confused that they don't match the recommendations given for 7 attributes in the documentation.
The recommendations we give in the documentation aren't the only way to have a successful study.  As a software company that people look to for authoritative recommendations, we err on the side of caution when suggesting these defaults (on the side of getting more data rather than less).

How thin the data are depends critically on how many levels per attribute you have, and when your test design shows each respondent each non-BYO level at least 4 times in ACBC, that again shows you are in a good position with your settings.  We recommend each non-BYO level be shown at least 2x, and preferably 3x, per respondent.

And, given your sample size, you are seeing standard errors of .04 or less from the test design report's aggregate logit estimation, even in the slightly shorter interview case.  That again shows you are in good shape relative to the recommendation in our trainings, which is to look for standard errors of 0.05 or less from aggregate logit.
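
For anyone curious where those numbers come from, here is an illustrative sketch of an aggregate multinomial logit fit with standard errors read off the approximate inverse Hessian at the optimum. This is a rough stand-in, not the Test Design module's exact computation.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch only (not the Test Design module's exact computation):
# fit an aggregate multinomial logit, then read standard errors off the
# approximate inverse Hessian. X has shape (tasks, alternatives, parameters)
# with coded attributes; y gives the chosen alternative's index per task.
def aggregate_logit_se(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    def neg_log_lik(beta):
        utils = X @ beta                               # (tasks, alternatives)
        utils -= utils.max(axis=1, keepdims=True)      # numerical stability
        log_probs = utils - np.log(np.exp(utils).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].sum()

    fit = minimize(neg_log_lik, np.zeros(X.shape[2]), method="BFGS")
    # Standard errors: square roots of the diagonal of the inverse Hessian
    # (BFGS returns an approximation, which is fine for a rough check).
    return np.sqrt(np.diag(fit.hess_inv))

# Rule of thumb from our trainings: all standard errors <= 0.05 is comfortable.
```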
...