MaxDiff Analysis in a Third-Party Statistical Program (Health Economics Application)


I wanted to ask whether I can analyze the MaxDiff results in a separate statistical application, such as R. Is that possible, and how?

What is the difference between the online MaxDiff analyzer and the one built into the program?

I am asking because when I try to download the MaxDiff results, the file extensions available are all the ones for the Lighthouse Studio application (like .cho), and I am using Case 2 MaxDiff, not Case 1. (Case 2 is where the attributes are further divided into levels and each scenario/task is composed of one level of each attribute, so levels of the same attribute can't appear in the same task.)

Note: My project is in health economics, so the final results are the part-worth utilities of each level, which, when combined into the best scenario, should not exceed 1. I am also applying a CBC technique to measure the same utilities, and if someone has experience in this field I would be very pleased to hear from you.

Thanks and best regards
asked Aug 22, 2019 by AMYN Bronze (3,000 points)

1 Answer

+1 vote
These are good questions and both Keith Chrzan and I have experience using Sawtooth Software’s programs for designing, fielding, and analyzing BW Case 2 (best-worst conjoint).  You trick our regular MaxDiff software into doing Case 2 by adding “conjoint-style” prohibitions, which prohibit levels from the same attribute from appearing within the same MaxDiff set.

Our Web-Based MaxDiff platform (Discover) can design and field BW Case 2 studies.  Our Windows-based platform for web survey development (Lighthouse Studio) can also do this.

It is possible to export the raw design and choice responses for your completed respondent records.  The Lighthouse Studio software gives it to you in a .CHO file, which has some pros and some cons (mostly cons, I think).  The Discover platform exports the raw data in a more friendly rectangular .CSV file.  With some additional data processing work, you could format the data for analysis in an outside package like R.
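To give a feel for that "additional data processing work," here is a small sketch (in Python; the same reshape is easy in R) of turning a rectangular design-plus-responses export into long format, one row per item shown, ready for MNL estimation. The function name and data layout are illustrative assumptions, not Sawtooth's actual export schema — check your own file's columns.

```python
# Hypothetical sketch: reshape respondent picks plus a design lookup
# into long format for choice modeling. Layout is assumed, not the
# real Lighthouse Studio / Discover export schema.

def to_long(respondents, design, n_sets):
    """respondents: list of dicts with 'id', 'version', and per-set
    'best'/'worst' picks.  design: {(version, set_no): [item, ...]}.
    Returns one row per item shown, flagging best and worst picks."""
    rows = []
    for r in respondents:
        for s in range(1, n_sets + 1):
            for item in design[(r["version"], s)]:
                rows.append({
                    "resp": r["id"], "set": s, "item": item,
                    "best": int(item == r["best"][s]),
                    "worst": int(item == r["worst"][s]),
                })
    return rows

# Toy data: one respondent, one design version, two sets of three items.
design = {(1, 1): [1, 2, 3], (1, 2): [2, 4, 5]}
resp = [{"id": 101, "version": 1,
         "best": {1: 2, 2: 4}, "worst": {1: 3, 2: 5}}]
long_rows = to_long(resp, design, n_sets=2)
```

The resulting long table is the shape most outside packages (e.g. mlogit in R) expect.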

But, you might find it convenient to use our built-in MNL tools for estimating scores for your BW Case 2 experiment.  Our tools can do aggregate analysis or HB-MNL analysis for individual-level scores.
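For intuition about what those MNL tools maximize, here is a minimal aggregate MNL log-likelihood in Python. This is only an illustration of the general model — Sawtooth's actual estimation details (and HB, which draws individual-level betas) differ.

```python
# Minimal aggregate MNL log-likelihood sketch (illustrative only).
import math

def mnl_loglik(beta, tasks):
    """tasks: list of (X, chosen) where X is a list of item rows
    (each a list of coded predictors) and chosen is the row index
    of the picked item.  Returns the summed log-likelihood."""
    ll = 0.0
    for X, chosen in tasks:
        vs = [sum(b * x for b, x in zip(beta, row)) for row in X]
        m = max(vs)                       # stabilize the softmax
        log_denom = m + math.log(sum(math.exp(v - m) for v in vs))
        ll += vs[chosen] - log_denom
    return ll

# Toy task: 3 items coded on 2 predictors; item 0 is chosen.
tasks = [([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]], 0)]
ll0 = mnl_loglik([0.0, 0.0], tasks)  # equal utilities -> log(1/3)
```

Maximizing this over beta (with any general-purpose optimizer) gives the aggregate logit scores.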

You mention wanting to compare results from a standard CBC to a BW Case 2 study.  I did something similar (using the same attribute list for CBC and BW Case 2) a few years back and I detailed how the coding of the design matrix is done for both CBC and Best-Worst Case 2.  Please note that I took the approach of computing scores for each level in the Best-Worst Case 2 study, rather than computing a separate average weight for each conjoint attribute along with the adjustments to the average weight for each level within each attribute.  (Both types of coding the design matrix will lead to the same model fit and the same predictions for simulated product concept choice likelihood).

Anyway, you can download the paper I’m describing at:

answered Aug 22, 2019 by Bryan Orme Platinum Sawtooth Software, Inc. (181,240 points)
Thank you, Bryan, for your detailed reply.

I have already completed designing and fielding my study using the Windows-based Sawtooth Lighthouse Studio, and now it is time for the analysis. So I was wondering whether the .cho file could somehow be manipulated for use in another statistical program, or whether there is any other possible solution. I plan to use the analyzer built into the program, but I wanted the flexibility to use the results in another statistical program in case the analysis requires more time than my license period allows.

Regarding the article you referred to, I will download it and read it carefully. I believe it will answer a lot of questions for me and will be of great help.

The .CHO file certainly has everything you need to conduct the analysis in a different software program, but you will probably need to conduct some data processing and rearranging of the data to prepare it to be in the proper format for a different program to analyze.  

The .CHO file shows you which items were shown in each choice set and which items were selected as best and worst.  That's all you need to analyze the MaxDiff data in another program.

Another way to get the MaxDiff data out of Lighthouse Studio for analysis in another program would be to export the experimental design (each block or version of the questionnaire) by opening your MaxDiff exercise from the questions list, clicking the Design tab, and then clicking Export Design.  This will export the design as a .CSV file.  Next, you would need to export each respondent's answers to the MaxDiff questions and which version they saw from the File + Data Management + Export Data area.  You would need to do the data processing to merge the two files (assigning the right version of the design to each respondent) and further prepare the data for analysis in an outside program.
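The merge step described above can be sketched as follows. The file layouts and column names here are pure assumptions for illustration — inspect your actual Export Design and Export Data files for the real headers.

```python
# Hypothetical merge of an exported design .CSV with respondent
# answers, keyed on design version. Column names are assumed.
import csv, io

design_csv = """version,set,pos1,pos2,pos3
1,1,1,2,3
1,2,2,4,5
"""
answers_csv = """resp,version,best_1,worst_1,best_2,worst_2
101,1,2,3,4,5
"""

design = {}
for row in csv.DictReader(io.StringIO(design_csv)):
    design[(int(row["version"]), int(row["set"]))] = [
        int(row[k]) for k in ("pos1", "pos2", "pos3")]

merged = []
for r in csv.DictReader(io.StringIO(answers_csv)):
    for s in (1, 2):
        merged.append({
            "resp": int(r["resp"]), "set": s,
            "shown": design[(int(r["version"]), s)],
            "best": int(r[f"best_{s}"]), "worst": int(r[f"worst_{s}"]),
        })
```

In practice you would read the two real files from disk instead of the inline strings, but the version-keyed join is the same.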
Oh, another thought.  You can always very easily get the utility scores out of Lighthouse Studio and move them into a different analysis package...if you want to use Sawtooth Software to estimate the utilities, but you want to use a different analysis package for utilizing those utility scores.
Thank you very much for the detailed answer. I hope that the third-party program can read the .CHO file; otherwise it might be a lengthy process and prone to errors.
I will keep that in mind, as I am still learning what different statistical analysis methods I can use to get my utilities and then constrain them to a (0, 1) or (-1, 0) range. I need to do this with different statistical models and then compare the models to find the best fit. Thanks again for the ideas and thoughts.
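For the (0, 1) constraint, one simple post-hoc rescaling is shown below. This is only an illustration of the arithmetic, not the anchoring method used in published health-valuation studies: it shifts each attribute so its worst level is 0, then divides by the best-profile total so the best possible scenario sums to exactly 1. The attribute names and raw values are made up.

```python
# Illustrative post-hoc rescaling of raw part-worths to a 0-1 range
# (not a substitute for a properly anchored valuation model).

def rescale_to_unit(utilities):
    """utilities: {attribute: {level: raw_utility}}.
    Shift each attribute so its worst level is 0, then divide all
    values by the best-profile total so the maximum sums to 1."""
    shifted = {a: {lv: u - min(levels.values()) for lv, u in levels.items()}
               for a, levels in utilities.items()}
    best_total = sum(max(levels.values()) for levels in shifted.values())
    return {a: {lv: u / best_total for lv, u in levels.items()}
            for a, levels in shifted.items()}

raw = {"mobility": {"none": 1.2, "some": 0.1, "severe": -1.4},
       "pain":     {"none": 0.9, "moderate": -0.2, "extreme": -1.1}}
scaled = rescale_to_unit(raw)
```

After rescaling, every level lies in [0, 1] and the best combination of levels sums to 1.0.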
Hi Bryan! I'm facing a client that might want to do latent class segmentation off of Anchored MaxDiff results for up to 13 separate analyses (!). Given the steps necessary to export the design independently from the responses, do data processing with the Anchored questions to establish the threshold for the model estimation, and match to responses, I think it would be easier for most users to just export the Probability Scores (presumably) and use the Magidson/Eagle approach for continuous scale factor presented at the recent Sawtooth conference. Of course, you'll need the software to do so.