Meaning of Standard Deviations in HB results?

Hi,

When I run a CBC/HB estimation, I get an HB report with the rescaled (and raw) average part-worth utilities as well as standard deviations (SD) for each attribute level. What do these SDs represent here? Do they show by how much respondents' part-worth utilities vary from the average part-worth utilities?

Additionally, the CBC/HB estimation gives me a CSV file of hb_covariances. When I open it in Excel, I see only columns of iterations for each attribute level (one per column), whereas I was expecting a matrix of variances and covariances. What do the values in the columns of the hb_covariances file represent?

Finally, I also get an hb_summary.txt file in which a matrix of variances and covariances can be seen. Can the variances in this file be used to calculate the standard errors for each attribute level? If not, how do I get the SEs?

I would greatly appreciate any help.

Best,
asked Dec 9, 2015 by anonymous

1 Answer

For the HB report that shows the standard deviations of the part-worth utilities, we depart from Bayesian stats and revert to frequentist stats that are much more familiar to most market researchers. Bayesians will roll their eyes, but this is what we do: we estimate a single point estimate of each part-worth for each respondent as the average of the "used" individual-level beta draws. Then, we compute the standard deviation across respondents' part-worths in the traditional way (the variance is the average squared difference between each respondent's value and the mean across respondents for that part-worth, and the standard deviation is the square root of that).
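
As a minimal sketch of that calculation (Python, assuming a hypothetical utilities.csv with one row of point-estimate part-worths per respondent; the file name and layout are illustrative, not the actual CBC/HB export format):

    import pandas as pd

    # Hypothetical layout: one row per respondent, one column per part-worth,
    # each value being that respondent's point estimate (the mean of the
    # "used" beta draws for that respondent).
    utilities = pd.read_csv("utilities.csv")

    means = utilities.mean(axis=0)            # average part-worth across respondents
    std_devs = utilities.std(axis=0, ddof=0)  # SD across respondents: average squared
                                              # deviation from the mean, then square root
    print(pd.DataFrame({"mean": means, "std_dev": std_devs}))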

Now, our HB software also outputs a covariances file that contains the estimate of the covariance matrix (we call it D) at each iteration.  To make it easier to read in a flat file, we've reported these data as a single row per iteration, where the matrix entries are set into the various columns of the .CSV file.  Look at the column labels to identify each row x column entry.
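
If you want to see a single iteration's D matrix in square form, one row of that file can be reshaped back. A sketch under the assumption that the first column is the iteration number and the remaining k*k columns hold the matrix entries row by row; check the column labels in your own file, since that ordering is an assumption here:

    import numpy as np
    import pandas as pd

    cov_rows = pd.read_csv("hb_covariances.csv")     # one row per saved iteration

    # Assumption: column 0 = iteration number, remaining k*k columns = D entries
    # laid out row by row (verify against the column labels in your file).
    values = cov_rows.iloc[0, 1:].to_numpy(dtype=float)
    k = int(round(np.sqrt(values.size)))
    D = values.reshape(k, k)                         # the covariance matrix at that iteration
    print(D)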

The summary file shows the average variance/covariance matrix.  But, this is not how we estimate the standard errors for each part-worth.  

There are two ways to obtain SEs for part-worths:

The Frequentist approach: take the standard deviations (across individual respondents' point estimates of part-worth utilities) described above and divide by the square root of the sample size.

The Bayesian approach: open the alpha.csv output file that contains the successive estimates of the population means at each iteration. Throw away the iterations prior to assumed convergence. Then, for each part-worth utility, compute the standard deviation across those draws. That's the standard error (the standard deviation of the sample mean). To obtain the 95% confidence interval, sort the estimates of alpha from low to high; the values from the 2.5th to the 97.5th percentile of each distribution are an empirical representation of the 95% confidence interval.
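
A sketch of both calculations (Python), assuming hypothetical file layouts: a utilities.csv with one row of point estimates per respondent, an alpha.csv with one row of population means per saved iteration whose first column is the iteration number, and a burn-in cutoff you have already judged to mark convergence:

    import numpy as np
    import pandas as pd

    # Frequentist: SD across respondents' point estimates, divided by sqrt(N)
    utilities = pd.read_csv("utilities.csv")            # hypothetical: one row per respondent
    se_freq = utilities.std(axis=0, ddof=0) / np.sqrt(len(utilities))
    print(se_freq)

    # Bayesian: SD of the post-convergence alpha draws
    burn_in = 10000                                     # assumption: draws before this are discarded
    alpha = pd.read_csv("alpha.csv")                    # one row per saved iteration
    used = alpha.iloc[burn_in:, 1:]                     # drop burn-in rows and the iteration column
    se_bayes = used.std(axis=0, ddof=0)                 # SE = SD of the draws of the population mean
    ci_95 = used.quantile([0.025, 0.975])               # empirical 95% interval per part-worth
    print(se_bayes)
    print(ci_95)

Keep in mind that the two sets of SEs are on different scales: the frequentist ones come from the rescaled (zero-centered diffs) point estimates, while the alpha draws are on the raw logit scale, a point that comes up again later in this thread.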
answered Dec 9, 2015 by Bryan Orme Platinum Sawtooth Software, Inc. (178,815 points)
Many thanks Bryan!

Just to make sure I understood correctly: I got the following average utilities and standard deviations after running an HB estimation (hb_report.xls):

Attribute    Avg Utilities (rescaled)    St. Deviations
Level 1      21.07                       27.43
Level 2      38.18                       27.18
Level 3      -47.99                      49.44
Level 4      -11.27                      47.26

and so on.

Can I now take these standard deviations to calculate standard errors?
For example, for the first part-worth SE = 27.43/sqrt(2073) = 0.6 (since 2073 is my sample size). Is this correct? These SEs do not need to be rescaled anymore, right? Can I then use the SEs and part-worths to calculate t-ratios (beta/SE = 21.07/0.6)?

Regarding the covariance matrix, why can't we use the variances to get SEs by taking the square roots of the variances?

Does it matter that one uses frequentist stats to derive the standard deviations and SEs, given that we are running an HB?

Thank you!
Yes, you can compute standard errors by dividing the standard deviations by the square root of the sample size. The t-test you get is based on the null hypothesis that the utility weight is zero. Because the utilities are zero-centered, at least some of the weights will likely fall around zero...which doesn't mean that the utility weight is not meaningful. Also, if half the respondents think red is better than blue and the other half think blue is better than red (with only these two colors in the attribute), then the t-ratio could be 0 even though individually respondents felt that color really mattered.
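
A sketch of the t-ratio arithmetic and of the red/blue caveat (Python, with made-up numbers rather than any real study data):

    import numpy as np

    # t-ratio from the report's rescaled mean and SD (illustrative numbers)
    mean_util, sd, n = 21.07, 27.43, 2073
    se = sd / np.sqrt(n)
    t = mean_util / se
    print(round(se, 2), round(t, 2))                 # roughly 0.6 and 35

    # The caveat: an evenly split preference averages to about zero even
    # though the attribute matters to every individual respondent.
    rng = np.random.default_rng(0)
    red_lovers = rng.normal(loc=3.0, scale=0.5, size=1000)    # strongly prefer red
    blue_lovers = rng.normal(loc=-3.0, scale=0.5, size=1000)  # strongly prefer blue
    utils = np.concatenate([red_lovers, blue_lovers])
    t_split = utils.mean() / (utils.std() / np.sqrt(utils.size))
    print(round(t_split, 2))                         # near zero despite strong individual preferences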

I think that the square roots of the variances in the var-cov matrix for HB are indeed standard errors (population mean standard deviations) for the beta weights. Compare those to the standard deviations of the used draws from the alpha.csv file to check that they are nearly identical.
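
A sketch of that comparison (Python), under the same hypothetical layout assumptions as above and assuming the saved files still contain the pre-convergence iterations you want to drop:

    import numpy as np
    import pandas as pd

    burn_in = 10000                                   # assumption: pre-convergence draws to drop

    cov_rows = pd.read_csv("hb_covariances.csv").iloc[burn_in:, 1:]
    k = int(round(np.sqrt(cov_rows.shape[1])))        # assumes k*k flattened entries per row
    D_mean = cov_rows.to_numpy(dtype=float).mean(axis=0).reshape(k, k)
    sqrt_diag = np.sqrt(np.diag(D_mean))              # square roots of the average variances

    alpha = pd.read_csv("alpha.csv").iloc[burn_in:, 1:]
    alpha_sd = alpha.std(axis=0, ddof=0).to_numpy()   # SDs of the used alpha draws

    print(np.column_stack([sqrt_diag, alpha_sd]))     # compare the two columns side by side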

Frequentist stats are technically not correct when applied to HB estimates. So, to be formally pure, one should use the Bayesian stats. But for most practitioner purposes, the frequentist stats we report in our software should be sufficient.
Hi Bryan,

Thank you very much for the detailed answer, and I'm sorry to bother you again with my questions.

I tried to calculate t-ratios with the standard deviations for each part-worth given in the hb_report file (by dividing the mean effect by the standard error, which I got by dividing the SD by sqrt(n)).

For example: part-worth = -2.1, SD = 18.35, n = 2073
SE = 18.35/sqrt(n) ≈ 0.4
t = -2.1/0.4 = -5.25

However, when I count what percent of the rows (iterations) in the alpha.csv file have the same sign for this part-worth, I get only 90%. If I understood correctly, this would mean that the effect is not significant.

How do you explain the difference in significance between the alpha-file approach and the t-ratio calculated as above?

Regarding the Bayesian way of getting SEs: I would need to rescale those SEs to be able to compute t-ratios with the rescaled part-worths, right? Is there an easy way to get the rescaling multiplier?

Many thanks!
I would trust the Bayesian approach (counting the draws from the alpha file) more than the frequentist approach that we apply to the HB point estimates. The Bayesian approach is true to the analytical method being used (HB and Gibbs sampling), whereas the frequentist approach is not entirely proper to apply to point estimates (collapsed draws) at the individual level from an HB routine.

90% of the alpha draws having the same sign means you can be 90% confident that the effect is less than 0. That isn't 95% confidence (the typical standard in marketing research hypothesis testing), but 90% is still pretty good confidence! If I were 90% or 95% likely to win the lottery, I'd feel pretty happy either way. I'd take that bet.
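
For reference, the sign count being discussed can be computed directly from the alpha draws. A sketch with the same layout assumptions as above and a hypothetical column label standing in for the actual one:

    import pandas as pd

    burn_in = 10000                                  # assumption: pre-convergence draws to drop
    alpha = pd.read_csv("alpha.csv").iloc[burn_in:, 1:]

    draws = alpha["A1L2"]                            # hypothetical label for the part-worth of interest
    share_negative = (draws < 0).mean()              # fraction of used draws below zero
    print(f"{share_negative:.1%} of the used draws are below zero")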

Oh, and yes: examining the alpha file means you are looking at population means on the raw HB logit scale. The raw scores are on a very different scale from our software's automatic report, which gave you means of zero-centered diffs computed for each respondent from the collapsed draws (the point estimates obtained by averaging the 100s or 1000s of used draws per respondent). Each respondent has a different rescaling multiplier to put them on zero-centered diffs, so trying to rescale the SEs from the alpha file to line up with the population means from the zero-centered diffs would be a mess. That's a dead-end path, and it isn't advisable anyway because it mixes apples and oranges.
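
For context, here is a sketch of the kind of per-respondent rescaling being described, written to a common reading of the zero-centered diffs convention (an average utility range of 100 per attribute per respondent); treat the exact multiplier formula as an assumption and check the software documentation for the definitive definition:

    import numpy as np

    def zero_centered_diffs(raw_utils, attribute_levels):
        """Rescale one respondent's raw utilities so the average range
        per attribute is 100 (zero-centered diffs convention, as assumed here)."""
        centered = raw_utils.astype(float)
        # zero-center within each attribute
        for idx in attribute_levels:
            centered[idx] -= centered[idx].mean()
        # per-respondent multiplier: total of best-minus-worst ranges -> 100 per attribute
        total_range = sum(centered[idx].max() - centered[idx].min() for idx in attribute_levels)
        multiplier = 100.0 * len(attribute_levels) / total_range
        return centered * multiplier

    # Made-up respondent with two attributes (3 levels and 2 levels)
    raw = np.array([1.2, -0.4, -0.8, 0.9, -0.9])
    print(zero_centered_diffs(raw, [[0, 1, 2], [3, 4]]))

Because each respondent's total utility range differs, the multiplier differs per respondent, which is why no single multiplier can cleanly translate the alpha-scale SEs onto the rescaled report.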
...