For the HB report that shows the standard deviations of the part-worth utilities, we depart from Bayesian stats and revert to Frequentist stats that are much more familiar to most market researchers. Bayesians will roll their eyes, but this is what we do: we compute a single point estimate for each part-worth for each respondent as the average of that respondent's "used" individual-level beta draws. Then, we compute the standard deviation across respondents' part-worths in the traditional way (where the variance is the average squared difference between each respondent's observation and the mean observation for each part-worth, and the standard deviation is the square root of that).
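As a minimal sketch of that computation (the array shapes and the synthetic `beta_draws` array here are illustrative, not the actual file layout):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical array of "used" individual-level beta draws:
# shape = (n_draws, n_respondents, n_partworths)
beta_draws = rng.normal(size=(500, 200, 4))

# Point estimate per respondent: average over the used draws
point_estimates = beta_draws.mean(axis=0)       # (n_respondents, n_partworths)

# Standard deviation across respondents' point estimates
# (ddof=0 matches the "average squared difference" definition above)
std_devs = point_estimates.std(axis=0, ddof=0)  # (n_partworths,)
```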

Now, our HB software also outputs a covariances file that contains the estimate of the covariance matrix (we call it D) at each iteration. To make it easier to read as a flat file, we've reported these data as a single row per iteration, where the matrix entries are spread across the columns of the .CSV file. Look at the column labels to identify each row x column entry.
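As an illustration, one row of such a file can be reshaped back into a square matrix. The entry order below is assumed to run row by row; check the column labels in your file for the actual layout:

```python
import numpy as np

n_pw = 3  # number of part-worth parameters

# Hypothetical single row from the covariances file (one iteration),
# with the n_pw x n_pw entries assumed to be laid out row by row
flat_row = np.arange(1.0, n_pw * n_pw + 1)

# Recover the covariance matrix D for that iteration
D = flat_row.reshape(n_pw, n_pw)
```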

The summary file shows the average variance/covariance matrix. But this is not how we estimate the standard errors for each part-worth.

There are two ways to obtain SEs for part-worths:

The Frequentist approach: take the standard deviations (across individual respondents' point estimates of part-worth utilities) described above and divide by the square root of the sample size.

The Bayesian approach: open the alpha.csv output file that contains the successive estimates of the population means at each iteration. Throw away the iterations prior to assumed convergence (the burn-in). Then, for each part-worth utility, compute the standard deviation across the remaining draws. That's the standard error (the standard deviation of the sample mean). To obtain the 95% confidence interval, sort the estimates of alpha from low to high values. The 2.5th to 97.5th percentile values in each distribution give an empirical representation of the 95% confidence interval.
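Both approaches can be sketched as follows. The array names and shapes are illustrative: `point_estimates` holds one row per respondent, and `alpha_draws` stands in for the post-convergence rows of alpha.csv:

```python
import numpy as np

rng = np.random.default_rng(1)
n_resp, n_pw = 2073, 4

# Hypothetical inputs
point_estimates = rng.normal(size=(n_resp, n_pw))  # per-respondent part-worths
alpha_draws = rng.normal(size=(5000, n_pw))        # alpha draws after burn-in

# Frequentist SE: SD across respondents / sqrt(sample size)
se_freq = point_estimates.std(axis=0, ddof=0) / np.sqrt(n_resp)

# Bayesian SE: SD of the retained alpha draws for each part-worth
se_bayes = alpha_draws.std(axis=0, ddof=0)

# Empirical 95% interval: 2.5th to 97.5th percentile of the draws
ci_low, ci_high = np.percentile(alpha_draws, [2.5, 97.5], axis=0)
```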

Just to make sure I understood correctly: I got the following average utilities and st.deviations after running an HB estimation (hb_report.xls)

Attribute    Avg Utilities (rescaled)    St.Deviations
Level 1         21.07                       27.43
Level 2         38.18                       27.18
Level 3        -47.99                       49.44
Level 4        -11.27                       47.26

and so on.

Can I now take these st.deviations to calculate standard errors?

For example, for the first part-worth, SE = 27.43/sqrt(2073) ≈ 0.60 (since 2073 is my sample size). Is this correct? These SEs do not need to be rescaled anymore, right? Can I then use the SEs and part-worths to calculate t-ratios (beta/SE = 21.07/0.60)?

Regarding the covariance matrix, why can't we use the variances to get SEs by taking the square roots of the variances?

Does it matter that one uses Frequentist stats to derive the st.deviations and SEs (since we are running an HB)?

Thank you!