Pooled (aggregate) logit can give a distorted view of attribute importances: when respondents disagree about an unordered attribute (like brand or color), their opposing preferences can cancel out in the aggregate, making it look as though the population doesn't care much about that attribute. (It sounds like you understand this, but I'm stating it for others on the Forum who might be reading.)
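To make the cancellation concrete, here's a minimal sketch with made-up part-worths (all numbers hypothetical): half the respondents strongly prefer Brand A and half strongly prefer Brand B, so the averaged utilities suggest nobody cares about brand even though every individual cares a lot.

```python
import numpy as np

# Hypothetical individual-level part-worths for a two-level Brand attribute,
# zero-centered within the attribute: three respondents strongly prefer
# Brand A, three strongly prefer Brand B.
brand_A = np.array([ 2.0,  2.0,  2.0, -2.0, -2.0, -2.0])
brand_B = -brand_A

print(np.mean(brand_A), np.mean(brand_B))   # 0.0 0.0 -> pooled view: "brand doesn't matter"
print(np.mean(np.abs(brand_A)))             # 2.0     -> but every respondent cares strongly
```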
Importance scores quantify the range of utilities within each attribute and rescale those ranges so they sum to 100% across attributes. Computing importances at the individual level (or within latent class segments, if you use a handful of segments or more) is more reflective of how respondents actually weigh the attributes.
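As a quick illustration of that calculation (the part-worths below are hypothetical, and I'm assuming zero-centered utilities within each attribute for one respondent):

```python
# Importance = (range of utilities within an attribute) / (sum of ranges), scaled to 100%.
part_worths = {
    "Brand": [1.5, -0.5, -1.0],
    "Color": [0.3, -0.3],
    "Price": [2.0, 0.0, -2.0],
}

ranges = {a: max(u) - min(u) for a, u in part_worths.items()}
total = sum(ranges.values())
importances = {a: 100 * r / total for a, r in ranges.items()}
print(importances)   # roughly Brand 35%, Color 8%, Price 56% -- sums to 100%
```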
I don't think it would be a big mistake to report importances from a latent class MNL or an HB-MNL, though importances can be misleading to interpret. The range of levels the analyst chooses to include for an attribute in the experiment has a direct bearing on the importance that attribute receives. Also, the importance score capitalizes on any difference between the best and worst levels for a respondent, even if that difference is due to random noise.
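Here's a small simulation sketch of that last point, using purely random "estimated" utilities whose true values are all zero (the sample size, number of levels, and noise level are arbitrary choices of mine): the best-minus-worst range is always strictly positive, so the attribute never shows 0% importance even when it truly doesn't matter.

```python
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_levels = 500, 4

# Hypothetical estimated part-worths = true value (0) + estimation noise
estimated = rng.normal(loc=0.0, scale=0.5, size=(n_respondents, n_levels))
ranges = estimated.max(axis=1) - estimated.min(axis=1)

print(ranges.mean())   # comfortably above zero, even though the true importance is zero
```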
I think the bigger potential flaw in your approach is relying on aggregate logit models to estimate WTP; that approach often exaggerates WTP. If you're interested, see this white paper we recently wrote, which outlines what we think is a better approach to WTP:
https://sawtoothsoftware.com/resources/technical-papers/estimating-willingness-to-pay-in-conjoint-analysis
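For readers who haven't seen it, here's a minimal sketch of the conventional ratio-of-coefficients WTP calculation that's often applied to pooled logit utilities (the coefficients below are hypothetical); the paper linked above outlines the approach we think works better.

```python
# Conventional ratio-based WTP from pooled (aggregate) coefficients:
# utility gained from the feature divided by utility lost per dollar of price.
pooled_utility_gain = 0.8    # hypothetical pooled utility difference for adding the feature
pooled_price_slope = 0.02    # hypothetical pooled utility lost per $1 of price

wtp = pooled_utility_gain / pooled_price_slope
print(f"Implied WTP: ${wtp:.2f}")   # $40.00 -- a single figure for the whole market, masking heterogeneity
```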