Choice Simulation based on HB

What exactly does the Lighthouse Studio Help mean when it says, "If we were tasked with presenting simulation results to an academically-minded audience, and if we had the luxury of greater time to conduct the analysis, we would probably take the approach of simulating on the HB draws rather than using RFC"?

I don't understand what "simulating on HB draws rather than using RFC" means. As I understood it until now, one can use HB results to run the choice simulator, while RFC is just one option within that simulator. What did I get wrong?
asked Mar 7 by bs77 Bronze (855 points)

1 Answer

Good question.  When HB runs, it typically performs (if using defaults for CBC/HB in our software) 10K "burn-in" iterations (draws) followed by 10K "used" iterations (draws).  The burn-in iterations are ignored, since convergence isn't yet assumed.  We then simply average across the 10K "used" iterations to produce a single summary set of utilities (point estimates) for each respondent.  This is practical and efficient for researchers in the trenches.  Most clients expect just one record per respondent, so it also fits better with what clients are used to.
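
To make that averaging step concrete, here is a minimal sketch (illustrative only, not Lighthouse Studio's implementation), with a random NumPy array standing in for the saved draws of a real HB run:

```python
import numpy as np

# Hypothetical "used" draws saved from an HB run (burn-in already
# discarded): one row of utilities per respondent per draw.
rng = np.random.default_rng(0)
n_resp, n_draws, n_utils = 300, 1000, 12
draws = rng.normal(size=(n_resp, n_draws, n_utils))

# Point estimates: average each respondent's draws into a single
# summary record of utilities.
point_estimates = draws.mean(axis=1)   # shape (300, 12)
print(point_estimates.shape)
```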

However, the academically correct and pure way to use HB results is to take (sample), say, every 10th used draw for each respondent, such that there are 1000 separate candidate sets of utilities for each respondent.  Academics like to simulate on these draws, making it look as if each respondent has 1000 "votes" in the simulator rather than one "vote" (and the logit equation / share of preference equation can be used within each draw to probabilistically split the votes across product alternatives).
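
A hedged sketch of that draw-level share of preference calculation, again with made-up utilities (in practice, each product's total utility would be the sum of its part-worths):

```python
import numpy as np

rng = np.random.default_rng(1)
n_resp, n_draws, n_products = 300, 1000, 4

# Hypothetical total utilities of each simulated product, per
# respondent and per draw.
utils = rng.normal(size=(n_resp, n_draws, n_products))

# Logit / share of preference equation applied within each draw:
# each draw's "vote" is split probabilistically across the products.
exp_u = np.exp(utils)
shares_per_draw = exp_u / exp_u.sum(axis=2, keepdims=True)

# Average the 1000 votes within respondent, then across respondents.
market_shares = shares_per_draw.mean(axis=1).mean(axis=0)
print(market_shares.round(3))   # sums to 1.0
```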

To reduce IIA problems, one could instead apply the first choice rule to each draw, meaning that each person would be simulated to have 1000 first choice votes...we first tabulate each respondent's 1000 votes across the alternatives, then summarize across respondents.
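
The same sketch with the first choice rule substituted in (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
utils = rng.normal(size=(300, 1000, 4))   # (resp, draws, products)

# First choice rule: each draw casts one whole vote for the
# highest-utility product.
winners = utils.argmax(axis=2)

# Tabulate each respondent's 1000 votes across the 4 products...
votes = np.apply_along_axis(np.bincount, 1, winners, minlength=4)

# ...then summarize across respondents to get shares of first choice.
print(votes.mean(axis=0) / 1000)
```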

RFC is like a "poor man's" version of simulating on the HB draws using the first choice method.  We've many times compared RFC operating on a single set of "point estimate" utilities against first choice on the HB draws, and found very similar results.  But RFC is much faster and uses less hard drive space.  So, the "poor man's" approach is still rather rich.
answered Mar 7 by Bryan Orme Platinum Sawtooth Software, Inc. (199,115 points)
Thanks Bryan (once again).
The choice simulator conveniently provides information on every possible scenario. For example, when I have 3 attributes with 4 levels each in the choice experiment, it is possible to simulate the SoP for all 64 scenarios (4*4*4).
As I understand it, you suggest not using the RFC method, as HB Reg is "better". But when I want to get information on each scenario, what method do you suggest for the choice simulator? Maybe I missed something and I don't need the simulator because my HB results already provide the needed information.
And also: does the choice simulator use zero-centered diffs or the raw data as input?
I'm having some difficulty understanding your comments.  If you are saying that you are trying all 64 renditions of your product simulated against a set of fixed competitors and the None, then academics would argue that simulating on the HB draws is more accurate.  However, in our tests of RFC against simulating on HB draws, we've found very little difference in the results.  Thus, although simulating on HB draws is academically more pure, it's harder for practitioners, takes longer to run, and uses more hard drive space.

Your comment suggests to me that you are just looking at average utility scores rather than conducting simulations of how product alternatives compete for choice when placed in competition in the marketplace.  Looking at average utility scores can only get you so far.  For most business decisions, one should conduct market simulations to predict the relative share of choice for the firm's product versus a realistic set of competitors.  Unless you know what the competitors look like, it's hard to know just from average utility scores the best approach to compete in a marketplace.

The input for the choice simulator is the raw utility scores (which are typically zero-centered), not the "zero-centered diffs" rescaled scores.
The simplified procedure I thought was the way to go is:
1) develop and field the CBC study,
2) analyze with HB to get individual utilities,
3) use these individual utilities ("profiles") as input for the choice simulator to get preferences for each scenario across the whole sample. One method to use within the choice simulator is RFC.
Is that correct?
Yes, as long as you are talking about simulating each of the 64 possible profiles against a realistic fixed set of competitors versus the None (64 separate simulation runs).  It wouldn't be appropriate to simulate the 64 product alternatives as 64 products in a single simulation run.
So it is appropriate to simulate "product 1" vs "product 2" vs "None" and to receive a SoP value (sum of SoP = 100) for that, and I need to repeat that 64 times with varying product combinations.
But it is not appropriate to simulate "product 1" vs "..." vs "product 64" vs "None" within one simulation, where the sum of SoP would still be 100.
Is this correct?
Why? Don't get me wrong, but I don't see the benefit of a simulation if I can only simulate one product against another, especially when I want to get an overview of specific product designs.
Usually, a researcher would be able to represent something like 3 to 12 of the major competitors in the marketplace (including the None), on their appropriate attributes and levels.  Then, the researcher would simulate how the firm's proposed offering would compete against that fixed array of competitors.  

The next step is to conduct sensitivity analysis, by trying different versions of the firm's proposed product against the array of fixed competitors.  So, 64 separate simulation runs: each time simulating one version of the client's product against the fixed array of competitive alternatives.
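
A minimal sketch of that sensitivity loop, assuming hypothetical part-worths and a made-up competitive set (the attribute coding, competitor profiles, and names here are invented for illustration):

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n_resp, levels = 300, (4, 4, 4)   # 3 attributes x 4 levels -> 64 candidates

# Hypothetical part-worths: one (respondents x levels) array per
# attribute, plus a None constant per respondent.
pw = [rng.normal(size=(n_resp, k)) for k in levels]
none_util = rng.normal(size=n_resp)

# Fixed competitors, each described by its level index on each attribute.
competitors = [(0, 1, 2), (3, 0, 1)]

def total_utility(profile):
    # Sum the relevant part-worths for one product profile.
    return sum(pw[a][:, lvl] for a, lvl in enumerate(profile))

results = {}
for candidate in itertools.product(*(range(k) for k in levels)):
    # One simulation run: this candidate vs. the fixed competitors vs. None.
    utils = np.column_stack([total_utility(candidate)]
                            + [total_utility(c) for c in competitors]
                            + [none_util])
    shares = np.exp(utils) / np.exp(utils).sum(axis=1, keepdims=True)
    results[candidate] = shares[:, 0].mean()   # candidate's mean SoP

best = max(results, key=results.get)
print(best, round(results[best], 3))
```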

It isn't as useful to drop all 64 variations of the firm's potential offering into a single simulation scenario, because there's no opportunity to see how the existing array of competitors would influence which product alternative for the firm would be best.
I understand that if people are interested in specific product development decisions, this is the way to go.

On the other hand, if somebody wants to get the "big picture" (with no initial preference), it is interesting to gain information about all competing scenarios, no matter whether they have a high or low preference.

Consequently, I don't understand why it shouldn't be "useful" to feed the choice simulator a bunch of different scenarios (including the None option) at the same time and let the simulation work. As a result, SoP gets distributed among those input scenarios.

So, in your opinion, is this not "useful" in a statistical sense, or not "useful" in a comprehension sense?
Market simulations are most useful when they are done for simulation scenarios that closely mimic the context of the way the CBC question looked to respondents.  If we showed 4 products plus a None, then (from a purist's standpoint) the market simulations should be done within a context very similar to this scope.

As you include more products in the market simulator than were shown in the choice task to respondents, the scale factor (the flatness or steepness of the shares of preference) diverges further from what we'd expect from a model if we had truly asked respondents choice tasks of the same dimensionality.

All that said, if you are just looking to prioritize the likely preference for your 4^3 product scenarios, and you don't care about whether preference for them could be asymmetrically affected by what features the competitors are offering, then putting all 64 products at the same time in the simulator (or just summing their part-worth utilities) would work.
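
As a sketch of that simpler "big picture" ranking (summing part-worth utilities per profile, with hypothetical arrays standing in for real HB point estimates):

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
n_resp, levels = 300, (4, 4, 4)
pw = [rng.normal(size=(n_resp, k)) for k in levels]   # hypothetical part-worths

# Average total utility of each of the 64 profiles across respondents.
avg_utility = {
    p: sum(pw[a][:, lvl] for a, lvl in enumerate(p)).mean()
    for p in itertools.product(*(range(k) for k in levels))
}
top5 = sorted(avg_utility, key=avg_utility.get, reverse=True)[:5]
print(top5)   # the five highest-preference profiles by average utility
```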

But, if you expect that the best product to offer should depend both on respondent preferences and on what competitors are covering in the marketplace, then this demands that you set up simulations with your firm's one product offering against the fixed and realistic set of competitors (repeated across the 64 candidate products as 64 separately run market simulation scenarios).
...