Quality indicators for CBC logit model analysis results

Hey there

I searched the forum but didn't find a satisfying answer to my very basic question.

Regarding CBC logit analyses: how can one evaluate whether one model is "better" than another?

For example:
- Model A is a model without interactions (only main effects)
- Model B is a model with interactions

Usually, if you add more variables, the model gets "better" in the sense that the difference in log-likelihood (compared to the null model, which is the same for both compared models) becomes larger.

I know that we can calculate p-values to determine the significance of the model. We can also check the t-ratios for significance, and of course we need to interpret the RLH fit statistic.

But none of that addresses the fact that the log-likelihood value simply gets "better" (it moves closer to zero) whenever we add variables.

Following that logic, one might conclude "the more variables (e.g., interactions) I add to the model, the better it gets," which is definitely not true.

I have also heard of "overfitting" a model, but this is not well described in the Sawtooth help. I also know that in the end one has to choose the model that best fits the questions you want to answer. But besides those content-oriented considerations, I am searching for other indicators to justify the model selection.

I would be happy if somebody could provide me with information on these questions, and I apologize if I missed a post that already covers this.

Many thanks!
asked Nov 27, 2018 by bs77 Bronze (815 points)

1 Answer

+2 votes
See Chapter 12 of our "Becoming an Expert in Conjoint Analysis" book, on the 2-log-likelihood test: http://www.sawtoothsoftware.com/169-support/technical-papers/cbc-related-papers/2033-statistical-testing

Sections 12.2.b and 12.2.e of that chapter.

Indeed, every time you add new terms to the model (whether useful or not), the fit statistic (log-likelihood) improves.  This statistical test checks whether the additional terms you added to the model (such as interaction terms) provided a statistically significant improvement in fit.

Most of our users employ HB for their final model.  Interaction terms that are found to be significant from aggregate logit analysis usually go away and are not significant under the HB model (which estimates utilities for each respondent rather than an average across groups of respondents).
answered Nov 27, 2018 by Bryan Orme Platinum Sawtooth Software, Inc. (189,140 points)
Hey Bryan,

Thank you for your helpful answer.  I didn't know about this section on the Sawtooth website.

May I add two follow-up questions? The chapter describes how to deal with a single interaction effect (price & brand). What if there are more interaction effects in addition to price & brand?

How much is too much? (That is what I meant in my first post by "overfitting.")

How can one evaluate which interactions to include and which to leave out? For example, the counts indicate significant two-way interactions between all attributes, whereas the interaction search indicates interactions of attribute A with B, C, and D.

Thanks again for your reply!
It just follows the same pattern.  You can test multiple sets of interaction terms added simultaneously to the model.  You just compute 2 times the difference in LL between the new model and the old model, and use degrees of freedom equal to the number of new parameters you added to the model.
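The 2-log-likelihood (likelihood ratio) test described above can be sketched in a few lines of Python. This is only an illustration with made-up log-likelihood values, not Sawtooth output; the p-value uses a closed-form chi-square survival function that is valid only for an even number of degrees of freedom (for the general case, use something like scipy.stats.chi2.sf).

```python
import math

def lr_test(ll_restricted, ll_full, added_params):
    """Likelihood ratio test: 2 * (LL_full - LL_restricted) follows a
    chi-square distribution with df = number of added parameters."""
    statistic = 2.0 * (ll_full - ll_restricted)
    # Closed-form chi-square survival function, valid for EVEN df only:
    #   P(X > x) = exp(-x/2) * sum_{i=0}^{df/2 - 1} (x/2)^i / i!
    if added_params % 2 != 0:
        raise ValueError("closed form shown here requires an even df")
    half = statistic / 2.0
    p_value = math.exp(-half) * sum(half**i / math.factorial(i)
                                    for i in range(added_params // 2))
    return statistic, p_value

# Hypothetical fits: main-effects model vs. a model with 4 added
# interaction parameters (numbers invented for illustration).
stat, p = lr_test(ll_restricted=-1200.0, ll_full=-1190.0, added_params=4)
print(f"2*diff(LL) = {stat:.1f}, df = 4, p = {p:.4f}")
```

With these invented numbers the statistic is 20 on 4 degrees of freedom, well past the conventional 0.05 threshold, so the added interaction terms would count as a significant improvement in fit.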

Most researchers who use our software use HB for the final model, and using HB means a different approach to examining potential interaction effects.  Our CBC/HB Model Explorer tool (a free tool from our website) is probably the best approach for that.

A big mistake is to include non-significant interaction terms in CBC models...interactions that seemed significant when looking at aggregate counts and aggregate logit, but that disappeared and were no longer meaningful once heterogeneity was captured in models like latent class and HB.
Hi Bryan,

Thanks, this sounds promising.

I just wanted to try the Model Explorer you recommended. Unfortunately, the path to the Command Interpreter (CBCHBCon.exe) cannot be found.

Do you have any ideas why this might happen, or where to find the .exe file?

Many thanks!
Have you installed CBC/HB first?  http://www.sawtoothsoftware.com/support/downloads/download-cbc-hb

Then, you have to export your CBC data to a .CHO file.  The first thing the Model Explorer does (after connecting to the CBC/HB software) is ask you to browse to your .CHO file.
That is an absolutely reasonable question. And no, I didn't. I thought it was included in Lighthouse Studio and didn't know that it is a separate piece of software.
If you are licensed to use CBC within Lighthouse Studio, you are also licensed to use the standalone CBC/HB system.  But you have to install it separately.  (It will use your subscription credentials from Lighthouse Studio.)