Good question. When HB runs, it typically performs (using the defaults for CBC/HB in our software) 10K "burn-in" iterations (draws) followed by 10K "used" iterations (draws). The burn-in iterations are ignored, since convergence isn't yet assumed. We simply average across the 10K "used" iterations to produce a single summary set of utilities (point estimates) for each respondent. This is practical and efficient for researchers in the trenches. Most clients expect just one record per respondent, so it also fits better with what clients are used to.
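As a minimal sketch of that averaging step (assuming the used draws are already stored per respondent as NumPy arrays; the names here are illustrative, not from our software):

```python
import numpy as np

# Hypothetical layout: draws_by_respondent[r] is a (10000, n_params) array
# holding respondent r's "used" draws (burn-in already discarded).
def point_estimates(draws_by_respondent):
    """Average each respondent's used draws into one vector of point estimates."""
    return {r: used.mean(axis=0) for r, used in draws_by_respondent.items()}
```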
However, the academically correct and pure way to use HB results is to take (sample), say, every 10th used draw for each respondent, so that there are 1000 separate realizations of each respondent's utilities. Academics like to simulate on these draws, making it look as if each respondent has 1000 "votes" in the simulator rather than one "vote" (and the logit equation / share of preference equation can be applied to each draw to probabilistically split the votes across the product alternatives).
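A rough sketch of simulating share of preference on the draws for one respondent (the `product_utils` helper, which sums the relevant part-worths into a total utility for each simulated product, is a hypothetical stand-in):

```python
import numpy as np

def share_of_preference_from_draws(draws, product_utils):
    """Average logit (share of preference) results over one respondent's draws.

    draws: (n_draws, n_params) array, e.g. every 10th used draw (about 1000 rows)
    product_utils: hypothetical helper returning one total utility per product
    """
    shares = []
    for beta in draws:
        v = product_utils(beta)            # total utility for each product
        expv = np.exp(v - v.max())         # numerically stable logit
        shares.append(expv / expv.sum())   # this draw's share of preference
    return np.mean(shares, axis=0)         # respondent-level shares across draws
```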
To reduce IIA problems, one could instead apply the first choice rule to each draw, so that each person is simulated to cast 1000 first choice votes. We tabulate those 1000 votes across the alternatives within each respondent, then summarize across respondents.
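Sketched the same way (again with the hypothetical `product_utils` helper), the first choice rule per draw looks like this; averaging the resulting vote shares across respondents gives the market-level shares:

```python
import numpy as np

def first_choice_from_draws(draws, product_utils, n_products):
    """Tabulate first-choice 'votes' across one respondent's draws."""
    votes = np.zeros(n_products)
    for beta in draws:
        v = product_utils(beta)     # total utility per product for this draw
        votes[np.argmax(v)] += 1    # the winning product gets this draw's vote
    return votes / votes.sum()      # respondent's share of first-choice votes
```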
RFC is like a "poor man's" version of simulating on the HB draws using the first choice method. We've compared the results many times between RFC operating on a single set of "point estimate" utilities vs. first choice on the HB draws, finding very similar results. But, RFC is much faster and uses less hard drive space. So, the "poor man's" approach is still rather rich.
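For intuition only, here is a rough sketch of that "poor man's" idea, not our exact RFC implementation: perturb the single set of point estimates with random error many times and apply the first choice rule to each perturbed set (the Gumbel error and its scale below are illustrative assumptions, as is `product_utils`).

```python
import numpy as np

def rfc_like_shares(point_estimate, product_utils, n_products,
                    n_iters=1000, scale=1.0, seed=0):
    """Illustrative RFC-style simulation on a single set of point estimates."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(n_products)
    for _ in range(n_iters):
        # Random error added to the part-worths stands in for the draw-to-draw
        # variation that the full set of HB draws would otherwise provide.
        noisy = point_estimate + rng.gumbel(scale=scale, size=point_estimate.shape)
        votes[np.argmax(product_utils(noisy))] += 1
    return votes / n_iters
```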