These are good questions, and both Keith Chrzan and I have experience using Sawtooth Software’s programs for designing, fielding, and analyzing BW Case 2 (best-worst conjoint). You trick our regular MaxDiff software into doing Case 2 by adding “conjoint-style” prohibitions, which keep two levels from the same attribute from appearing within the same MaxDiff set, so each set ends up reading as a full product profile.
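For intuition, here is a minimal sketch (not our actual design code) of what that prohibition enforces. The item-to-attribute mapping and the attribute names are my own invention for illustration:

```python
# Hypothetical mapping of MaxDiff item IDs to the conjoint attribute
# each level belongs to (three 3-level attributes, names made up).
ITEM_TO_ATTRIBUTE = {
    1: "brand", 2: "brand", 3: "brand",
    4: "price", 5: "price", 6: "price",
    7: "speed", 8: "speed", 9: "speed",
}

def violates_prohibition(item_set):
    """True if two items in the set come from the same conjoint attribute."""
    attributes = [ITEM_TO_ATTRIBUTE[item] for item in item_set]
    return len(attributes) != len(set(attributes))

# A valid Case 2 set shows one level per attribute (a full profile)...
assert not violates_prohibition([1, 5, 9])
# ...while a set with two Brand levels would be prohibited.
assert violates_prohibition([1, 2, 9])
```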
Our web-based MaxDiff platform (Discover) can design and field BW Case 2 studies, and so can our Windows-based platform for web survey development (Lighthouse Studio).
It is possible to export the raw design and choice responses for your completed respondent records. Lighthouse Studio gives them to you in a .CHO file, which has some pros and some cons (mostly cons, I think). The Discover platform exports the raw data in a friendlier rectangular .CSV file. With some additional data processing work (sketched below), you could format the data for analysis in an outside package like R.
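Here is a sketch of that processing step, assuming a hypothetical rectangular layout for the export: one row per respondent-task, with the levels shown plus the best and worst picks. The column names and the dummy data are my own invention; the real export’s headers will differ:

```python
import pandas as pd

# Dummy stand-in for pd.read_csv("case2_export.csv") -- hypothetical file
# name and columns -- with one respondent and two tasks.
raw = pd.DataFrame({
    "respondent": [101, 101],
    "task":       [1, 2],
    "item1": [1, 2], "item2": [5, 4], "item3": [9, 7],
    "best":  [1, 4],
    "worst": [9, 7],
})

# Stack to "long" format: one row per level shown, flagging best/worst picks.
rows = []
for _, r in raw.iterrows():
    for item in (r["item1"], r["item2"], r["item3"]):
        rows.append({
            "respondent": r["respondent"],
            "task": r["task"],
            "level_id": item,
            "chose_best": int(item == r["best"]),
            "chose_worst": int(item == r["worst"]),
        })

long_df = pd.DataFrame(rows)
print(long_df)  # ready to dummy-code and hand to an MNL routine in R, etc.
```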
But you might find it convenient to use our built-in MNL tools for estimating scores for your BW Case 2 experiment. Our tools can do aggregate MNL analysis, or HB-MNL analysis for individual-level scores.
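If you do want to roll your own, here is a bare-bones aggregate MNL sketch, not our actual estimation code, using fabricated toy inputs and one common convention for worst picks (modeling them on negated utilities):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(beta, X_sets, best_idx, worst_idx):
    """X_sets: per-task (alternatives x parameters) design matrices;
    best_idx/worst_idx: index of the best/worst pick in each task."""
    ll = 0.0
    for X, b, w in zip(X_sets, best_idx, worst_idx):
        u = X @ beta
        ll += u[b] - np.log(np.exp(u).sum())    # best pick: standard logit
        ll += -u[w] - np.log(np.exp(-u).sum())  # worst pick: negated utilities
    return -ll

# Toy example: two tasks, three alternatives, dummy-coded with 2 parameters.
X = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
fit = minimize(neg_log_likelihood, np.zeros(2),
               args=([X, X], [0, 1], [2, 2]), method="BFGS")
print(fit.x)  # pooled (aggregate) utility estimates
```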
You mention wanting to compare results from a standard CBC to a BW Case 2 study. I did something similar (using the same attribute list for both CBC and BW Case 2) a few years back, and I detailed how the design matrix is coded for both CBC and Best-Worst Case 2. Please note that I took the approach of estimating a score for each level in the Best-Worst Case 2 study, rather than estimating a separate average weight for each conjoint attribute plus adjustments to that average weight for each level within the attribute. (Both ways of coding the design matrix lead to the same model fit and the same predictions of simulated product concept choice likelihood; a quick numeric sketch below shows why.)
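The equivalence is just a re-expression of the same numbers. Using my own illustrative utilities (not values from the study):

```python
import numpy as np

# Hypothetical per-level utilities for one 3-level attribute.
level_scores = np.array([0.8, 0.1, -0.3])

# Equivalent decomposition: average attribute weight + level adjustments.
attribute_weight = level_scores.mean()          # 0.2
adjustments = level_scores - attribute_weight   # sum to zero within attribute

# Reassembling recovers the per-level scores exactly, so both codings give
# identical fit and identical simulated choice likelihoods.
assert np.allclose(attribute_weight + adjustments, level_scores)
```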
Anyway, you can download the paper I’m describing at:
https://www.sawtoothsoftware.com/download/techpap/Common_Scale_Hybrid_Discrete_Choice.pdf