We don't currently have any functionality for a standard CBC to estimate utility scores for respondents during the survey. Individual-level models can often be pretty noisy, which is why HB is the recommended approach with a sufficient sample size. The ratings-based ACA exercises do have this capability, and the choice-based Adaptive CBC asks the respondent at the beginning of the exercise to define their ideal product. You can then reference that ideal product, or the winner of the tournament section, later in the survey, though those are based purely on the respondent's answers and aren't doing real-time utility estimation the way the ACA exercise does.
In theory, if every respondent saw the same set of questions, you could come up with something yourself, such as counting how often each level was chosen across all of the choice tasks, but that would require some legitimate programming skills to write that kind of code and have it running in the background of the survey.
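To make the counting idea concrete, here is a minimal sketch of what that background logic might look like. This is purely illustrative and not a real survey-platform API: it assumes a fixed design where each task is a list of concepts, each concept is a mapping of attribute to level, and the respondent's answers are stored as the index of the chosen concept per task. The function and data names are hypothetical.

```python
# Hypothetical count-based scoring for a fixed CBC design.
# Assumes every respondent sees the same tasks; higher counts for a
# level loosely suggest stronger preference, but this is a crude proxy,
# not real utility estimation like HB or ACA provides.
from collections import Counter

def count_level_choices(tasks, answers):
    """Tally how often each (attribute, level) pair appears in the
    concepts this respondent chose."""
    counts = Counter()
    for task, choice in zip(tasks, answers):
        for attribute, level in task[choice].items():
            counts[(attribute, level)] += 1
    return counts

# Toy example: two tasks with two concepts each.
tasks = [
    [{"brand": "A", "price": "$10"}, {"brand": "B", "price": "$15"}],
    [{"brand": "A", "price": "$15"}, {"brand": "B", "price": "$10"}],
]
answers = [0, 1]  # chose concept 1 in task 1, concept 2 in task 2

scores = count_level_choices(tasks, answers)
print(scores[("price", "$10")])  # → 2
```

In a live survey this tallying would run each time a choice task is answered, and the resulting counts could drive downstream question logic, though the counts confound level preference with how often each level happened to appear, which is one reason proper utility estimation is preferred.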