Wow, Michael, it looks like you've done your homework on norms for design testing using robotic respondents for our aggregate logit-based tests in ACBC! Nice work.
It also seems you're paying attention to our recommendations on the degree of random shock, from our white paper about summed pricing: https://sawtoothsoftware.com/resources/technical-papers/three-ways-to-treat-overall-price-in-conjoint-analysis
...because you're calling attention to the fact that the base price makes up ~50% of the total product price, such that only about half of the price summation is correlated with the feature attributes. The greater the base price's contribution to the total product price, the less correlated summed price is with the feature attributes in the experimental design--leading to even greater precision in the price slope beta.
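To make that point concrete, here's a minimal simulation sketch. The feature add-on prices, base price levels, and the uniform +/-30% shock are my own illustrative assumptions, not values from any actual ACBC study. It shows how the correlation between the shocked summed price and a single feature column shrinks as the base price becomes a larger share of the total:

```python
import numpy as np

rng = np.random.default_rng(7)
n_tasks = 5000

# Hypothetical add-on prices for three binary features (illustrative values only)
feature_prices = np.array([100.0, 150.0, 200.0])
design = rng.integers(0, 2, size=(n_tasks, 3))      # feature on/off per concept
feature_sum = design @ feature_prices               # price contributed by features

for base in (0.0, 450.0, 2000.0):                   # base = ~0%, ~50%, ~80% of total
    total = base + feature_sum
    shocked = total * rng.uniform(0.7, 1.3, size=n_tasks)  # +/-30% random shock
    r = abs(np.corrcoef(shocked, design[:, 0])[0, 1])
    print(f"base price {base:6.0f}: |corr(shocked price, feature 1)| = {r:.3f}")
```

The mechanism: the shock is proportional to the total price, so a larger base price injects more absolute price variation that is independent of the features, leaving the shocked summed price less collinear with any single feature column.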
It's also great that you remember our recommendation, for design testing and sample size planning (in CBC and ACBC), that robotic respondents (random-answering ones) should lead to standard errors of about 0.05 or less for attribute levels.
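For anyone following along, here's a rough sketch of what such a robotic-respondent test looks like under the hood. The design sizes, the single 3-level attribute, and the Newton-Raphson fit are my own illustrative assumptions, not Sawtooth's actual test code. Since random answers imply true utilities of zero, the fitted aggregate logit betas hover near zero, and the standard errors tell you what precision the design and sample size alone can deliver:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical test: 300 robotic respondents x 10 tasks, 3 concepts per task,
# one 3-level attribute, effects-coded into 2 columns (illustrative sizes only)
n_resp, n_tasks, n_alts, n_levels = 300, 10, 3, 3
T = n_resp * n_tasks

levels = rng.integers(0, n_levels, size=(T, n_alts))               # random design
codes = np.vstack([np.eye(n_levels - 1), -np.ones(n_levels - 1)])  # effects coding
X = codes[levels]                                 # (T, alts, 2) design matrix
choices = rng.integers(0, n_alts, size=T)         # robotic (random) answers

beta = np.zeros(n_levels - 1)
for _ in range(20):                               # Newton-Raphson for aggregate MNL
    util = X @ beta                               # (T, alts) utilities
    p = np.exp(util - util.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)             # choice probabilities
    xbar = (p[..., None] * X).sum(axis=1)         # probability-weighted mean row
    grad = (X[np.arange(T), choices] - xbar).sum(axis=0)
    d = X - xbar[:, None, :]
    H = np.einsum('tj,tja,tjb->ab', p, d, d)      # observed information matrix
    beta = beta + np.linalg.solve(H, grad)

se = np.sqrt(np.diag(np.linalg.inv(H)))
print("betas:", np.round(beta, 3))
print("standard errors:", np.round(se, 3))  # should land under the 0.05 rule of thumb
```

Standard errors are read off the inverse of the information matrix at the fitted solution; to size a study, you'd adjust respondents or tasks until they drop to about 0.05 or less.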
However, one thing I don't think we've ever written in our help documentation or white papers is that the 0.05 standard error recommendation applies to standard attributes that are part-worth (effects-coded). I've also found, as you have, that standard errors for the summed price attribute tend to run a bit higher than for the part-worth coded attributes. This is a function of the fact that the summed price attribute (even after shocking with random error) still has a modest correlation with the other attributes in the design matrix. (But not enough to foul up your experiment, in our experience.) So you should expect its standard error to float a bit higher than for the non-correlated attributes. As long as you follow the +/-30% or so rule for random shock to summed price, in our experience you'll be fine.