Interaction effects in ACBC HB, Estimate Task-Specific Scale Factors, and Graph in ACBC HB

Dear Sawtooth community,

I have 3 questions about running HB with ACBC data.

1. I first ran an HB estimation without interaction effects, exported the utilities (.hbu), and got a file containing the individual-level main effects from that run. Then I used the modified 2-Log Likelihood test with this file to decide whether I need to include interactions in the final HB analysis.

Here are the results (I have 5 attributes in the ACBC):
Interaction                2LL P-Value for Interaction Effect      Gain in Pct. Cert. over Main Effects
Attribute 1 * Attribute 3       0.00000                            0.36%
Attribute 2 * Attribute 3       0.00000                            0.30%
Attribute 1 * Attribute 2       0.00000                            0.30%
Attribute 2 * Attribute 4       0.00000                            0.20%
Attribute 1 * Attribute 4       0.00000                            0.20%
Attribute 3 * Attribute 4       0.00009                            0.12%
Attribute 2 * Attribute 5       0.00899                            0.09%
Attribute 1 * Attribute 5       0.08226                            0.06%
Attribute 3 * Attribute 5       0.20223                            0.04%
Attribute 4 * Attribute 5       0.22119                            0.04%

All the gains in Pct. Cert. over main effects are under 1%. Does this mean I don't need to include interaction effects in the final HB analysis? I'm just concerned about negative impacts on the results if I were to include interaction effects in the final HB run.
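(For reference on the mechanics behind those two columns: the 2LL p-value comes from a likelihood-ratio chi-square test of the interaction model against the main-effects model, and Pct. Cert. is a pseudo R² measured against the null model. A minimal sketch, with hypothetical log-likelihood values standing in for what Lighthouse Studio computes internally from the choice data and .hbu utilities:)

```python
from scipy.stats import chi2

# Hypothetical log-likelihoods (Lighthouse Studio computes these
# internally; the values below are made up for illustration):
ll_null = -3000.0   # chance (null) model
ll_main = -2000.0   # main effects only
ll_int  = -1990.0   # main effects + one interaction
df      = 4         # added parameters: (levels_A - 1) * (levels_B - 1)

lr = 2 * (ll_int - ll_main)    # the "2LL" likelihood-ratio statistic
p_value = chi2.sf(lr, df)      # small p => interaction is "significant"

# Percent certainty (pseudo R^2) and the gain shown in the report:
pct_cert_main = 1 - ll_main / ll_null
pct_cert_int  = 1 - ll_int / ll_null
gain = (pct_cert_int - pct_cert_main) * 100   # in percentage points

print(f"2LL = {lr:.1f}, p = {p_value:.5f}, gain = {gain:.2f}%")
```

Note how a tiny gain in percent certainty (here about 0.33 points) can still come with a near-zero p-value on a large sample, which is exactly the pattern in the table above.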

2. In the HB settings, do you recommend selecting Estimate Task-Specific Scale Factors (Otter's method)? And if I select Otter's method, do I need to activate "Save Random Draws" (I just found this in the help file)?

3. In the HB settings, under the Graph section, "Graph Population Exponential Moving Averages" should be selected, right? Also, is it appropriate to specify the starting seed as 1?

I'm new to HB estimation. Thanks in advance!
asked Feb 25, 2021 by Xinlei Hu (360 points)

1 Answer

0 votes
I guess it's too late...

1. Yes, no need to include interaction effects, as the gains are all marginal (<1%).

2. If your sample size is <300, I would not recommend using Otter's method.

3. The seed can be any number. It is just important to record it so you can reproduce the results later.
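(To illustrate the seed point with a minimal sketch, assuming a generic NumPy-based sampler; the actual HB sampler is internal to Lighthouse Studio:)

```python
import numpy as np

# The seed's value (1, 42, ...) does not matter; what matters is
# recording it, because rerunning with the same seed reproduces
# the exact same sequence of random draws.
draws_a = np.random.default_rng(seed=1).normal(size=5)
draws_b = np.random.default_rng(seed=1).normal(size=5)
assert np.allclose(draws_a, draws_b)   # same seed -> identical results
```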
answered May 17, 2021 by slimjumbo (220 points)
Many thanks for your answers to these questions. Regarding Otter's method, I learned from the help files that it is better to increase the number of used iterations. What is an appropriate number of iterations (both the number of iterations before using results and the number of draws to be used for each respondent) when using Otter's method? Thanks in advance.
So much depends on the stability of your dataset.  If your dataset is stable (no prohibitions, reasonable sample size, you followed the recommendations of the software for number of screener questions and number of choice tournament questions to ask in ACBC), then 20K initial burn-in iterations followed by 20K used iterations should be fine.  However, if you want to be extra cautious (and you have the time to wait), then 100K initial iterations followed by 100K used iterations should be plenty for most datasets.
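(To make the burn-in/used split concrete, here is a toy sketch of how the two iteration counts are typically consumed in an MCMC sampler; this is a hypothetical stand-in, not Lighthouse Studio's implementation:)

```python
import numpy as np

def run_toy_chain(n_burn_in=20_000, n_used=20_000):
    """Discard the burn-in draws, then average the used draws.

    The two arguments mirror the two Lighthouse settings: 'number of
    iterations before using results' and 'number of draws used'.
    """
    rng = np.random.default_rng(seed=1)
    beta, kept = 0.0, []
    for it in range(n_burn_in + n_used):
        beta = 0.99 * beta + rng.normal(scale=0.1)   # toy transition step
        if it >= n_burn_in:        # only post-burn-in draws are kept
            kept.append(beta)
    return np.mean(kept)           # point estimate = mean of used draws

print(run_toy_chain())             # near 0.0 for this toy chain
```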
By the way, Bryan, the automated interaction search for studies with HB estimation has been widely discussed in other threads. You mentioned that "the interaction effects observed in aggregate analysis are usually due to unrecognized heterogeneity. Once we recognize heterogeneity via the HB model, then the interactions tend to not be useful anymore."

But in this case the significant interaction effects all yield only a marginal increase in the pseudo R² (<1%), so could Xinlei still use this as confirmation that no interaction effects need to be included in the HB estimation? My simple thinking would be that if the interaction search shows no interaction effect is required based on aggregate analysis, then all the more none is required for HB. Am I right, or do you think the interaction search tool is not sufficient to argue for non-inclusion of the interaction effects?
Add-on: Given the sensitivity of the test, maybe a 99% confidence level is best?
That's a good question: if no interaction effect is detectable in aggregate (pooled) analysis, shouldn't the interaction effect also be of no value in individual-level models (such as HB)? I would tend to agree, but I can imagine situations where differences in the directionality of interaction effects at the segment or respondent level could make them disappear when analyzed in the aggregate. So, it's not a given.
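(Bryan's caveat can be made concrete with a toy example: if two equal-sized segments hold opposite-sign interaction utilities, they average to zero in the pooled data, so an aggregate test sees nothing even though the interaction is real at the segment level. Hypothetical values:)

```python
import numpy as np

# Interaction utility for one attribute combination, by segment:
segment_1 = +0.5   # this segment values the combination
segment_2 = -0.5   # this segment dislikes the same combination

# Pooled (aggregate) analysis effectively averages across respondents:
print(np.mean([segment_1, segment_2]))   # 0.0 -> invisible in aggregate

# HB can recover +0.5 and -0.5 at the individual level, so the
# interaction still matters even though it cancels when pooled.
```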
Apologies, I just re-read Xinlei's post and she mentioned that she imported the .hbu file into the interaction search, meaning that individual-level estimates are taken into account, which is a better approach than plain pooled analysis.
In this case (taking the .hbu file into account), would you recommend going with a 95% or 99% CI? The Lighthouse manual states that to reduce the likelihood of false positives it is good to use a threshold higher than 95%. However, this only relates to the pooled analysis, without taking the individual estimates from the .hbu file into consideration.

Highly appreciate your advice on the recommended CI.
I think it's a matter of preference and how aggressive you want to be about modeling potential interaction effects. I'd be less aggressive and demand 99% confidence (p-value <= 0.01) to include an interaction effect when using this interaction search approach leveraging the HB utilities (as provided by Lighthouse Studio).
Many thanks for your answer to my question, Bryan.
Highly appreciate your discussion about interactions.
...