I'm not sure the design test uses allocation data to test your design. The design test is a strong test of identification ("will I get a model with utilities for all levels of all attributes, or will it crash?"), but the actual standard errors depend on the actual utilities (the test assumes all utilities are zero), so in that sense the test is an approximation from the start. In addition, the design test assumes an aggregate logit model, when we're usually running HB or latent class MNL instead, so again the test is an approximation meant to help us make decisions about our design.
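To see why the zero-utilities assumption matters, here is a rough sketch of the standard aggregate MNL calculation that kind of test approximates (this is textbook logit math, not Sawtooth's actual code); the toy design, the sample size, and the "plausible" nonzero utilities are all invented for illustration:

import numpy as np

def mnl_information(tasks, beta):
    # Fisher information for aggregate MNL: sum over tasks of X' (diag(p) - p p') X
    info = np.zeros((len(beta), len(beta)))
    for X in tasks:                      # X: (alternatives x parameters) design matrix for one task
        u = X @ beta
        p = np.exp(u - u.max())
        p /= p.sum()
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
    return info

def expected_se(tasks, beta, n_respondents=300):
    # Asymptotic standard errors if every respondent answers these tasks
    cov = np.linalg.inv(n_respondents * mnl_information(tasks, beta))
    return np.sqrt(np.diag(cov))

# Toy design: two tasks, three alternatives, one 3-level attribute (effects-coded)
tasks = [np.array([[ 1.0,  0.0], [ 0.0,  1.0], [-1.0, -1.0]]),
         np.array([[ 0.0,  1.0], [-1.0, -1.0], [ 1.0,  0.0]])]

print(expected_se(tasks, np.zeros(2)))           # the test's assumption: all utilities zero
print(expected_se(tasks, np.array([1.0, 0.4])))  # plausible nonzero utilities give different SEs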
If you really want to test your design against your situation, you'll want to generate some realistic data for your design (for each respondent, if you want to run an HB model) and then analyze the generated data set on your design. Of course, this is a lot of work and folks mostly don't want to do it.
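If you do want to try it, the workflow is roughly: assume a population distribution of utilities, draw a utility vector for each respondent, simulate their choices on your actual design, and then run your analysis on the simulated answers. A minimal sketch in Python follows; the toy design, the assumed population means and covariance, and the function names are all made up, and I fit a simple aggregate logit at the end only as a stand-in for the HB or latent class run you'd really do:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

def simulate_choices(tasks, respondent_betas):
    # One simulated choice per task per respondent, using Gumbel errors (i.e., an MNL choice rule)
    return np.array([[int(np.argmax(X @ beta + rng.gumbel(size=X.shape[0]))) for X in tasks]
                     for beta in respondent_betas])

def neg_loglik(beta, tasks, choices):
    # Aggregate MNL log-likelihood over all respondents and tasks
    ll = 0.0
    for resp in choices:
        for X, c in zip(tasks, resp):
            u = X @ beta
            ll += u[c] - u.max() - np.log(np.exp(u - u.max()).sum())
    return -ll

# Invented design and population: two effects-coded parameters, heterogeneous respondents
tasks = [np.array([[ 1.0,  0.0], [ 0.0,  1.0], [-1.0, -1.0]]),
         np.array([[ 0.0,  1.0], [-1.0, -1.0], [ 1.0,  0.0]])]
true_means = np.array([1.0, 0.4])
betas = rng.multivariate_normal(true_means, 0.3 * np.eye(2), size=300)

choices = simulate_choices(tasks, betas)
fit = minimize(neg_loglik, np.zeros(2), args=(tasks, choices), method="BFGS")
print("Assumed population means:", true_means, "Recovered aggregate utilities:", fit.x)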
Also, I'm not sure I'd say we get more learning from each screen with an allocation. While that MAY happen, what also happens is that allocation responses show more inconsistency, even within respondent. Because added response error scales logit utilities down, the utilities tend to be smaller with allocations than with single choices (see the paper by Chrzan and Alberg in the 2001 Sawtooth Software Conference Proceedings, particularly the small table on page 180).