How to calculate significance for multiple product alternatives in the choice simulator?


I know it is possible to compare two products (alternatives) in the choice simulator and calculate significance, but that's not what I am looking for.

What I am trying to find out is whether there is a way to get the "confidence" of the share-of-preference (SoP) calculation in the choice simulator. Assuming the choice simulator lets me calculate dozens of product alternatives at the same time (depending on the attribute levels), how can I back up my observations with a measure of confidence (p-values?)?

Or maybe that doesn't make any sense at all... I am not sure, but I would be happy if someone could help me out.

Thanks a lot,
asked Jun 28, 2019 by bs77 Bronze (790 points)

1 Answer

0 votes
You can get the confidence interval around your share estimate by adding and subtracting 1.96 times the standard error (for a 95% confidence interval).  Is this what you're wondering?
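As an illustration, here is a minimal Python sketch of that interval calculation. The share and standard error are the example numbers quoted later in this thread; they are not special in any way.

```python
import math

share = 3.234  # share estimate, in percent
se = 0.272     # standard error of the share, in percent

z95 = 1.96     # standard normal critical value for a 95% confidence interval
lower = share - z95 * se
upper = share + z95 * se

print(f"95% CI: [{lower:.3f}%, {upper:.3f}%]")
```

With these numbers the interval works out to roughly [2.701%, 3.767%], which should match what the simulator reports.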
answered Jun 28, 2019 by Keith Chrzan Platinum Sawtooth Software, Inc. (95,675 points)
Ok, but the confidence intervals are already calculated by the choice simulator.
I am looking to find a p-value for each share estimate itself. Is this possible?
OK, take a share, subtract zero, and divide by the standard error of that share.  This gives you the Z-value, which you can look up in a standard normal table (or calculate in Excel).  For example, if it's 1.96 then your p-value is .05.
I am confused. "subtract zero"? May I construct an example for better understanding?
share estimate: 3.234 %
std. error: 0.272 %
N: 800
Can you please show me how the calculation you mentioned would work?
Thanks a lot Keith!
3.234 - 0 = 3.234, and 3.234 / 0.272 ≈ 11.89 is your Z-value.
You can look that up in a table from a stats book or calculate it in Excel, but you'll find that the p-value is way less than 0.01.
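The same Z-to-p-value lookup can be done in a few lines of Python instead of Excel or a table. This sketch uses the standard normal relationship p = erfc(|Z| / √2) for a two-sided test; the input numbers are the example figures from this thread.

```python
import math

share = 3.234  # share estimate, in percent
se = 0.272     # standard error of the share, in percent

# Z-statistic for the null hypothesis that the true share is zero
z = (share - 0.0) / se

# Two-sided p-value from the standard normal distribution
p = math.erfc(abs(z) / math.sqrt(2))

print(f"Z = {z:.2f}, p = {p:.3g}")
```

Here Z comes out around 11.89, so the p-value is vanishingly small, exactly as described above.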
That's great. OK, I was close, but I forgot to convert the values from % to numbers in the Simulation Report XLS export.

May I ask one follow-up question: how do you interpret this p-value in this context? What does 99.9% confidence mean? Could I run 1,000 simulations and in only one of them would the value differ?
A 0.01 p-value in a simulation means just what it does anywhere else: only 1 time in a hundred would you get a share that large if the true share REALLY was 0.0.  If you're unimpressed by this finding, you'll see why we usually don't expend a lot of breath talking about p-values in simulations.