ACBC and Latent Class segmentation

I am currently writing my master's thesis and would like to perform an ACBC study. I am familiar with the CBC process within Lighthouse, but I noticed the segmentation process is quite different for ACBC analysis.
I would like to know if Latent Class is suitable when using ACBC to find preference-based segments?

I have found contrasting arguments regarding the use of ACBC data with LC analysis.
Cunningham (2010) cites an ACBC study that used Latent Class for segmentation, Jervis & Drake (2012) used it in their CBC vs. ACBC comparison study, and the ACBC technical paper (2014) briefly states that you may export the ACBC data for Latent Class analysis. Yet, on this forum, Orme warns against using LC for segmentation with ACBC because of the screener section. Overall, this seems like a somewhat ambiguous topic to me.

I'd like to know whether there is any consensus on this matter. Secondly, what is the proper tool for finding preference-based segments on the basis of the ACBC (HB) data?

I hope someone can shed some light on this matter for me. Thank you!
asked Jul 24, 2019 by ash (180 points)

1 Answer

0 votes
This is a great question and I'm glad for the research you've already done on the topic!

The reason we didn't include Latent Class MNL as a capability within the ACBC software is that the prevalence of respondents sorting concepts into the "possibility" vs. "not a possibility" bucket from the Screener section is captured in the "None" parameter, and this could very significantly drive the Latent Class MNL segmentation.  You might find segments that were strongly influenced by "yeah-saying" bias, or the propensity to be agreeable in the screening section.  We didn't want this.

However, I strongly recommend using the normalized (to Zero-Centered Diffs) ACBC utilities (not including the None parameter) in latent class clustering or K-means, or Sawtooth Software's CCEA system for cluster ensemble analysis.

In other words, you run HB and take the zero-centered diffs normalized utilities.  Delete the column for the "None" utility; but submit all the other part-worth utilities to the k-means clustering, CCEA ensemble clustering, or latent class clustering algorithm.  By the way, Sawtooth Software does not offer latent class clustering software (but Latent Gold software does).
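That workflow can be sketched in a few lines. This is a minimal illustration only, using simulated data in place of an exported HB utility file; the matrix shape, the assumption that the "None" utility is the last column, and the choice of four clusters are all hypothetical.

```python
# Sketch: k-means on zero-centered-diffs HB utilities, with the "None"
# column dropped before clustering. Data and column layout are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for the exported ZC-diffs utility matrix:
# one row per respondent, one column per part-worth, last column = "None".
utilities = rng.normal(size=(300, 13))

basis = utilities[:, :-1]  # drop the "None" utility column

km = KMeans(n_clusters=4, n_init=25, random_state=0).fit(basis)
segments = km.labels_          # segment assignment per respondent
sizes = np.bincount(segments)  # segment sizes
print(sizes.sum())             # 300 respondents assigned in total
```

With real data you would replace the simulated matrix with the utilities exported from Lighthouse and compare solutions across several values of `n_clusters`.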

Again, to clarify: Sawtooth Software offers Latent Class MNL for CBC and MaxDiff.  Latent Class MNL is not available for ACBC within our software.  Latent Class MNL is a procedure for estimating part-worth utilities while simultaneously detecting segments.  Latent Class clustering is used when you just have basis variables but no dependent variable (e.g., normalized part-worth utilities are the basis variables).

Hope this helps!
answered Jul 24, 2019 by Bryan Orme Platinum Sawtooth Software, Inc. (181,965 points)
edited Jul 24, 2019 by Bryan Orme
Thank you so much, Bryan, for your detailed and quick response! I have a few questions following your answer, and please excuse the amount of text. I thought this would be easier than asking the questions one by one.

I have been doing some digging into how to do my clustering/segmentation analysis, based on your feedback. I have some options in mind, and I would really appreciate it if you could steer me in the right direction a bit. To give you some context: my goal is to identify preference-based segments. I want to know how big these segments are and combine them with some profiling variables to characterize the segments.

The easiest option for me is to use the HB utilities (Zero-Centered Diffs) with K-means clustering in SPSS. But I also think this is the most inferior option; would you agree? The clusters will be driven largely by the most important attribute(s). With Latent Class, you can also see the attribute importances per cluster. To assign attribute importances to the K-means clusters, is it just a matter of calculating the centroids, so you end up with the mean attribute importances per cluster?

Another option, as you mentioned, is CCEA, which can use mixed methods (like k-means) in the cluster analysis. For ACBC data, would you rate this above using Latent Class as the clustering method?

Latent Class segmentation based on the HB utilities (ZC) seems like the most robust form of segmentation here, but also the most complex. If I go for this option, I plan to use the XLSTAT-Latent Class software, which is built on the principles of Latent Gold. What would the procedure look like: do I take only the HB utilities (ZC), and not the attribute importances and the None option, into the analysis? I have no clue, since I am used to doing Latent Class Analysis with CBC in Lighthouse. I discovered it would have made my life much simpler if I could have used the Sawtooth Latent Class standalone software for my Latent Class clustering analysis, but so be it.

My final question is regarding the use of a Logit Analysis for my ACBC study (by running a 1-group Latent Class Analysis). This is common practice for CBC analysis, but what would you say is the added value of doing this for ACBC, since I already have the utilities/attribute importances in the HB output? Is it to look at the t-stats of the utilities and examine overall model fit?

Thank you in advance, and again, excuse the number of questions. Since ACBC is a relatively new and less used type of conjoint, information on some topics is hard to find. Therefore, I really appreciate your help!
In my opinion, allowing the clusters to be more greatly influenced by the more important attributes (because their levels have larger variance) is actually a good thing for preference segmentation from conjoint analysis.

Using any one K-means procedure (such as SPSS) is not as strong as leveraging an ensemble of different cluster solutions.  Latent class segmentation, in our limited experience, has worked very well for many data situations.  In our most recent investigation, it often produced superior solutions; but in some cases it produced inferior ones.  So, we think that an ensemble solution that leverages many types of clustering approaches (latent class included) would produce the most defensible and reproducible clusters.  Our CCEA system can either build the ensemble automatically (sans latent class), or it can let you add new solutions (such as a series of latent class solutions from an outside software source) to the ensemble prior to its consensus steps.
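To make the ensemble idea concrete, here is one common way to combine several cluster solutions: count how often each pair of respondents lands in the same cluster across solutions (a co-association matrix), then cut that consensus matrix with hierarchical clustering. This is only an illustration of the general principle on simulated data, not CCEA's actual algorithm; the member solutions, cluster counts, and data are all hypothetical.

```python
# Sketch of consensus (ensemble) clustering via a co-association matrix.
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))  # stand-in for ZC-diffs utilities (None dropped)
n = X.shape[0]

# Ensemble members: k-means solutions with different k (could also include
# latent class solutions imported from outside software).
members = [KMeans(n_clusters=k, n_init=5, random_state=s).fit_predict(X)
           for s, k in enumerate([3, 4, 5, 6])]

# Co-association: fraction of solutions that put respondents i and j together.
coassoc = np.zeros((n, n))
for labels in members:
    coassoc += (labels[:, None] == labels[None, :])
coassoc /= len(members)

# Consensus step: average-linkage clustering on 1 - co-association.
np.fill_diagonal(coassoc, 1.0)
dist = squareform(1.0 - coassoc, checks=False)
consensus = fcluster(linkage(dist, method="average"), t=4, criterion="maxclust")
print(len(np.unique(consensus)))  # number of consensus segments (at most 4)
```

The appeal of the consensus step is that pairs of respondents who cluster together under many different methods and settings stay together, which tends to make the final solution more reproducible than any single run.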

I think you should drop the None column from the segmentation analysis (for developing the clusters).  When you are using the ZC-diffs utilities as basis variables, those variables themselves incorporate the attribute importances.  So, there is no need to add the attribute importances as new basis variables to the clustering procedures.
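If you then want to report mean attribute importances per segment (as asked above), you can compute them from the utilities after clustering rather than using them as basis variables. A minimal sketch, assuming a hypothetical attribute layout of three attributes with 3, 4, and 3 levels and simulated data:

```python
# Sketch: mean attribute importances per segment, derived from ZC-diffs
# part-worths after clustering. Attribute layout and data are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
utilities = rng.normal(size=(150, 10))   # part-worths, None column dropped
segments = rng.integers(0, 3, size=150)  # cluster labels from any method

# Which utility columns belong to which attribute (hypothetical layout).
attr_slices = [slice(0, 3), slice(3, 7), slice(7, 10)]

def importances(util_rows):
    # Importance of an attribute = range (max - min) of its level utilities,
    # normalized so the importances sum to 100 for each respondent.
    ranges = np.column_stack(
        [util_rows[:, s].max(axis=1) - util_rows[:, s].min(axis=1)
         for s in attr_slices])
    return 100 * ranges / ranges.sum(axis=1, keepdims=True)

for seg in np.unique(segments):
    mean_imp = importances(utilities[segments == seg]).mean(axis=0)
    print(seg, np.round(mean_imp, 1))  # mean importances per segment
```

Averaging the per-respondent importances within each segment gives the segment-level importance profile without letting the importances double-count in the clustering itself.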

It is possible to analyze ACBC data using aggregate logit, but I don't see the value in doing this.  Utilities and importance scores from an aggregate (pooled) run are usually inferior to the utilities and importance scores that are estimated at the individual level via something like HB.