# How to interpret interaction effects (ACBC)

Hi,

I don't really understand how to interpret the interaction effects generated by the interaction search in the Lighthouse Studio software (ACBC).

I realize that the p-value tells whether an interaction effect is significant, but how should I read the other values?

I analyzed the attributes of employees with the ACBC method. One example of an interaction effect:

Run:
salary x reputation

Parameters in Model:
31

Log-Likelihood Fit:
-8357.229676

Chi Square Value:
48.20844118

2LL P-Value for Interaction Effect:
9.01314E-08 (--> it's highly significant)

Gain in Pct. Cert. over Main Effects
0.21%

How should I understand these values? What do the Log-Likelihood Fit and the Gain mean?

I would like to write something like: "A low salary is equalized by a good reputation." But I don't know if it's permissible.

Your hypothesis "a low salary is equalized by a good reputation" could probably be answered just by the main effects (the independent utility effects of the attributes).  However, if there were a strong interaction between those two attributes, then it would be important to include it in the model.

With choice models, we measure the fit of the utilities to people's choices.  Let's say that the utilities predict (with 60% likelihood) that a respondent will pick a particular alternative in a particular choice set and let's say that the respondent actually does pick that item.  We say that the likelihood of the utilities fitting that person's task is 60%.

Now let's say the utilities also predict a 60% likelihood for a second task, and the person again picks the predicted alternative.  The total likelihood (the joint likelihood) across the two tasks is 0.6 x 0.6 = 0.36.

The problem comes when you keep multiplying out these likelihoods across dozens or thousands of choice tasks.  The numbers get so close to zero that it often becomes hard for computers to retain enough precision.  So, statisticians have done a little transformation that mathematically keeps track of the fit in an equivalent way.  They take the natural log of the likelihood for each choice task and add them across choice tasks.  The natural log of 0.6 is -0.51083.  Add log-likelihoods across thousands of tasks and you get negative values like -8357 in your example.  Not very intuitive!

But the log-likelihood numbers give you the ability to use familiar chi-square statistics for statistical testing.  You build two models (say, one with main effects only and the other with main effects plus interaction effects) and you compare the log-likelihood between the two models.  Twice the difference in the log-likelihoods is distributed as Chi-Square, with degrees of freedom for the Chi-square test equal to the difference in the number of utility parameters you fit in the two models.
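The bookkeeping above can be sketched in a few lines of Python.  Note that the main-effects log-likelihood used here is hypothetical: it is the value implied by the reported Chi-square, since it was not shown in the output.

```python
import math

# Two tasks, each predicted at 60% likelihood and actually chosen.
p1, p2 = 0.6, 0.6

# Multiplying raw likelihoods drives the joint likelihood toward zero...
joint = p1 * p2                      # 0.36

# ...so we sum natural logs instead, which is mathematically equivalent.
ll = math.log(p1) + math.log(p2)     # -0.51083 + -0.51083

# Both bookkeeping methods agree:
assert abs(math.exp(ll) - joint) < 1e-12

# Model comparison: twice the gain in log-likelihood is the Chi-square stat.
ll_interactions = -8357.229676   # fit reported in the example above
ll_main_only = -8381.333897      # hypothetical main-effects-only fit
chi_square = 2 * (ll_interactions - ll_main_only)
print(chi_square)                # roughly the 48.208 reported above
```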

So, you see in your test that a Chi-square is listed, with a p-value (the likelihood of observing a Chi-square stat that big by chance).  With these aggregate (pooled) logit tests, interaction effects are often statistically significant, but they can be practically very small.  That's because pooled logit leverages a very large amount of data to estimate a relatively small number of utility values.
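As a sanity check, the reported p-value can be reproduced from the Chi-square statistic.  The degrees of freedom are not shown in your output, so 8 is an assumption here; it is the value consistent with the reported p-value, and it matches what an interaction between, say, a 3-level and a 5-level attribute would add ((3-1) x (5-1) extra parameters).

```python
import math

def chi2_sf(x, df):
    # Survival function P(X > x) of the Chi-square distribution.
    # For even df there is a closed form:
    #   exp(-x/2) * sum_{i=0}^{df/2 - 1} (x/2)^i / i!
    assert df % 2 == 0, "closed form used here requires even df"
    h = x / 2.0
    return math.exp(-h) * sum(h**i / math.factorial(i) for i in range(df // 2))

# 8 df is an assumption (see lead-in); the statistic is from the example.
p = chi2_sf(48.20844118, 8)
print(p)  # approximately 9.01e-08, matching the reported p-value
```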

In terms of practical (managerial) relevance, it's probably better to think about how much the fit (in terms of RLH) is actually improving.  RLH is the root likelihood.  Recall I earlier gave the example of utilities predicting a 60% likelihood of a person picking a given alternative that they actually end up choosing.  That's a likelihood fit of 0.6.  Root likelihood is an average (a geometric average) of the likelihood fit across all tasks.  If you can improve that fit (by adding an interaction term) by a healthy margin, then perhaps the interaction is doing a lot.  In your example, improving the fit by 0.21% doesn't seem like much to me.
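A minimal sketch of the RLH computation, using made-up per-task likelihood fits for one respondent:

```python
import math

# Hypothetical likelihood fits for one respondent across five tasks.
fits = [0.6, 0.55, 0.7, 0.5, 0.65]

# RLH is the geometric mean of the per-task likelihoods...
rlh = math.prod(fits) ** (1 / len(fits))

# ...which is the same as exponentiating the average log-likelihood,
# tying RLH back to the log-likelihood bookkeeping described earlier.
rlh_via_logs = math.exp(sum(math.log(p) for p in fits) / len(fits))
assert abs(rlh - rlh_via_logs) < 1e-12
```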

I should note that these tests built into Lighthouse Studio are based on pooled (aggregate) logit, which is not really aligned with the utility estimation approach that most Sawtooth Software users end up using in practice: HB.

We have developed a better approach for testing the effect of interactions in ACBC or CBC.  It's called the "CBC/HB Model Explorer": http://www.sawtoothsoftware.com/support/downloads/tools-scripts
answered Jan 28, 2017 by Platinum (175,290 points)