1. The research Walt and I did applies only to MaxDiff and CBC, not ACBC. ACBC is a different animal because it involves three types of choice tasks, appended together for estimation: BYO, Screener, and Tournament choices. My opinion is that typical ACBC studies carry much more information at the individual level than similar CBC studies, so a higher prior variance is justified than for a similarly dimensioned CBC study. Prior degrees of freedom probably should follow the sample-size recommendations in our article. So, I'd be inclined to use a prior variance of 1.0 for typical ACBC studies, with degrees of freedom depending on sample size as reported in our article.
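To make those prior settings concrete, here is a minimal sketch of how a prior variance and additional degrees of freedom can feed an inverse-Wishart prior on the population covariance matrix in HB. The variable names and the exact scale-matrix construction are my own illustration of a common parameterization, not necessarily the software's internal implementation:

```python
import numpy as np
from scipy.stats import invwishart

n_params = 12         # hypothetical number of part-worth parameters
prior_variance = 1.0  # recommended above for typical ACBC studies
add_df = 5            # additional degrees of freedom, chosen by sample size

# One common parameterization: the inverse-Wishart prior on the population
# covariance matrix has df = n_params + add_df and a scale matrix
# proportional to prior_variance times the identity.
df = n_params + add_df
scale = prior_variance * df * np.eye(n_params)

# A draw from the prior; a larger prior_variance shifts draws toward
# covariance matrices that allow more respondent heterogeneity.
D_draw = invwishart.rvs(df=df, scale=scale)
print(D_draw.shape)  # (12, 12)
```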
2. For typical ACBC studies, and when estimating main effects only, 20K burn-in iterations plus 20K saved draws should be sufficient. But it wouldn't hurt to increase both of those settings if you have the time.
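For what those two settings control, here is a toy sketch of the burn-in-plus-saved-draws bookkeeping, using a deliberately simple random-walk Metropolis sampler on made-up data rather than the actual HB sampler:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: posterior of a single mean with known unit variance and a
# flat prior. This only illustrates the iteration bookkeeping, not HB.
data = rng.normal(loc=2.0, scale=1.0, size=50)

def log_post(mu):
    return -0.5 * np.sum((data - mu) ** 2)

N_BURN_IN = 20_000  # iterations discarded while the chain converges
N_DRAWS = 20_000    # iterations kept and averaged for point estimates

mu, draws = 0.0, []
for it in range(N_BURN_IN + N_DRAWS):
    prop = mu + rng.normal(scale=0.3)  # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                      # accept the proposal
    if it >= N_BURN_IN:                # save only post-burn-in draws
        draws.append(mu)

print(np.mean(draws))  # point estimate, close to the sample mean of data
```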
3. As long as the sample size is above perhaps n=200, I'd recommend Otter's method.
4. I wouldn't compare the stated BYO concept with the final "winning" concept, because the winning concept can come from any new concept generated in the near-neighbor design. If you are using summed pricing (I'm assuming you are), that design can produce concepts even better than the BYO, because random price shocks can make a near-neighbor concept cheaper.
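A hedged numeric sketch of that price-shock point; the roughly ±30% shock range and the component prices below are hypothetical, not your study's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical component prices for the levels in a respondent's BYO concept.
byo_level_prices = [120.0, 45.0, 30.0]
summed_price = sum(byo_level_prices)  # 195.0

# Summed pricing typically shows the summed price with a random shock,
# e.g. drawn uniformly within +/-30% (the range here is illustrative).
shock = rng.uniform(0.70, 1.30)
shown_price = summed_price * shock

# A near-neighbor concept with a downward shock can beat the BYO concept
# on total utility even with slightly less preferred features, simply
# because it is cheaper.
print(round(shown_price, 2))
```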
5. Pct. Cert. is a pseudo R-squared. It indicates how much better the fit of HB's MNL model is than random (uninformed) choices. A Pct. Cert. of 0 means a fit equal to uninformed random choices; 100% means perfect predictions and no respondent error. RLH is root likelihood (the geometric mean of the predicted probabilities of the chosen alternatives) and is challenging to interpret for ACBC studies, since RLH depends on the number of concepts in each choice task...and ACBC is a mixture of BYO, Screener, and Tournament choice tasks, each of which can have a different number of alternatives per task.
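Both statistics can be computed directly from the predicted probabilities of the chosen alternatives. A minimal sketch with made-up probabilities and task sizes, which also shows why RLH depends on the number of alternatives per task:

```python
import numpy as np

# Predicted probability of the chosen alternative for each task, and the
# number of alternatives shown in that task (illustrative values only).
chosen_probs = np.array([0.55, 0.40, 0.70, 0.25])
n_alts = np.array([2, 3, 3, 5])  # BYO/Screener/Tournament sizes can differ

ll = np.sum(np.log(chosen_probs))   # model log-likelihood
ll0 = np.sum(np.log(1.0 / n_alts))  # chance (uninformed) log-likelihood

rlh = np.exp(ll / len(chosen_probs))  # geometric mean probability
pct_cert = 1.0 - ll / ll0             # 0 = chance fit, 1 = perfect fit

print(round(rlh, 3), round(pct_cert, 3))
```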
6. The frequentist approach would indeed be to run t-tests or F-tests in SPSS on the zero-centered diffs by group. There are also Bayesian approaches to statistical testing between groups, which would involve running HB with covariates within our software.
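A minimal sketch of the frequentist version with hypothetical zero-centered diffs for one attribute level across two groups (SPSS's independent-samples t-test would give the same answer):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)

# Hypothetical zero-centered diffs for one attribute level, by group.
group_a = rng.normal(loc=10.0, scale=25.0, size=150)
group_b = rng.normal(loc=2.0, scale=25.0, size=180)

# Two-sample t-test; Welch's version does not assume equal variances.
t_stat, p_value = ttest_ind(group_a, group_b, equal_var=False)
print(round(t_stat, 2), round(p_value, 4))
```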