Whether to use Lighthouse Studio's built-in latent class or the standalone Latent Class Module mainly depends on whether you need to do something customized (by modifying the .CHO file manually) or not. The math is the same. But if you use the standalone Latent Class system, it reads in the .CHO file that you export from Lighthouse Studio. The .CHO file is dummy-coded, and the reporting you'll see from the standalone Latent Class doesn't show the utility of the reference level (it isn't aware of the presence of a reference level...it just knows about the columns that were fed to it and are in the design matrix). You just need to remember that the reference level has a zero utility, since MaxDiff coding for analysis uses dummy coding, where the reference level is a row of 0s in the design matrix.
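To make the dummy-coding point concrete, here is a minimal sketch (item names and the choice of reference are made up, not from any actual .CHO file): the reference item gets no column of its own, so its row in the design matrix is all zeros and its utility is fixed at 0.

```python
# Hypothetical 4-item MaxDiff, dummy-coded with item "D" as the reference level.
# Each non-reference item gets one column; the reference's row is all zeros.
items = ["A", "B", "C", "D"]
reference = "D"

coded = {item: [int(item == other) for other in items if other != reference]
         for item in items}

print(coded["A"])  # [1, 0, 0]
print(coded["D"])  # [0, 0, 0]  <- the reference level: all zeros, utility fixed at 0
```

The standalone module only ever sees the three columns, which is why its report has nothing to say about item "D"; you supply the zero yourself when interpreting the output.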

MAE (Mean Absolute Error) is a measure of fit when you are using a model to predict choice probabilities for held-out data (e.g., held-out choice tasks). It's rarely the case that MaxDiff practitioners have holdout choice tasks, but it would be possible if the practitioner planned for this ahead of time and included holdout choice tasks in the questionnaire.
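As a quick illustration of how MAE works in this context, here is a minimal sketch with made-up numbers: compare the model's predicted choice shares for a holdout task against the shares actually observed among respondents.

```python
# Hypothetical holdout task with 3 concepts; all numbers are invented.
predicted = [0.42, 0.33, 0.25]  # model-predicted choice shares
observed  = [0.50, 0.30, 0.20]  # observed respondent shares

# MAE: average absolute gap between predicted and observed shares.
mae = sum(abs(p - o) for p, o in zip(predicted, observed)) / len(predicted)
print(round(mae, 4))  # 0.0533
```

Lower MAE means the model's predicted shares track the holdout choices more closely; in practice you'd average this across all holdout tasks.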

From other posts you've made, it seems you are using best-worst conjoint (profile-case MaxDiff, or Best-Worst Case 2). In that case, there is only one reference level set to zero utility across all attributes. One of the interesting outcomes of Best-Worst Case 2 is that the utilities of levels can be compared across conjoint attributes...as opposed to the standard conjoint or CBC case, in which they cannot. Therefore, summing the worst levels of each attribute within a Best-Worst Case 2 study doesn't mean adding zeros. It means adding the utility of the worst level within each attribute, and those utilities are not necessarily zero.

So, if you want the worst product concept to sum to zero across its attribute levels, you'll need to find the intercept to add to all the level utilities such that the sum of the worst levels across attributes is zero. If you have multiple latent classes, you'll want to find this intercept separately for each class. If you are running 1 class (aggregate logit), there is just one set of utilities and one rescaling intercept.

Next, you want the utilities rescaled such that the best levels across all attributes sum to 1.0. That means you take the shifted utilities from the previous step (the paragraph directly above) and multiply them by the scaling factor that makes the best product's utilities sum to 1.0 (i.e., divide by the sum of the best levels).
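The two steps above can be sketched as follows. This is a minimal illustration with invented utilities for a hypothetical 3-attribute Best-Worst Case 2 study, not output from any actual run: first shift every utility by a common intercept so the worst levels sum to zero, then scale so the best levels sum to 1.0.

```python
# Made-up level utilities for 3 hypothetical attributes.
attributes = {
    "brand": [0.8, 0.2, -0.4],
    "price": [1.1, 0.0, -0.9],
    "speed": [0.5, -0.1, -0.6],
}

# Step 1: add an intercept to every utility so the worst levels sum to zero.
# If the worst levels sum to W across A attributes, the intercept is -W / A.
worst_sum = sum(min(levels) for levels in attributes.values())
intercept = -worst_sum / len(attributes)
shifted = {a: [u + intercept for u in levels] for a, levels in attributes.items()}

# Step 2: multiply by the factor that makes the best levels sum to 1.0.
best_sum = sum(max(levels) for levels in shifted.values())
rescaled = {a: [u / best_sum for u in levels] for a, levels in shifted.items()}

print(round(sum(max(v) for v in rescaled.values()), 6))  # 1.0
```

With multiple latent classes you would repeat this per class, since each class has its own intercept and scaling factor.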

Please note that after doing this rescaling it is no longer appropriate to use the logit equation to compute the likelihood of choosing one product concept versus another! The scale factor has been modified from the original scale that was estimated from the choice probabilities expressed by respondents in the questionnaire.

This process could also be applied to HB utilities, by rescaling at the individual level.

Alternatively, you could compute the best and worst possible products' total utilities according to the scaling from the latent class utility run or the HB utility run. Let's say you get -1.5 for the worst possible and +3.5 for the best possible product concept.

Next, to rescale the utilities for these two products (or any product in between) to a range of 0.0 to 1.0, just figure out what % of the way the new product's total utility falls between -1.5 and 3.5. For example, if you get a result of 1.0, then you know that 1.0 is 50% of the way between -1.5 and 3.5. So, the rescaled utility for that product concept (on the 0.0 to 1.0 scale) is 0.5.
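That last step is simple min-max rescaling. Here is a minimal sketch using the example -1.5 and +3.5 endpoints from above (those numbers are just the illustration's, not from a real run):

```python
# Worst and best possible product totals, from the example above.
worst_total, best_total = -1.5, 3.5

def rescale(product_total):
    """Fraction of the way from worst_total to best_total (0.0 to 1.0 scale)."""
    return (product_total - worst_total) / (best_total - worst_total)

print(rescale(1.0))   # 0.5 -> halfway between -1.5 and 3.5
print(rescale(-1.5))  # 0.0 -> the worst possible product
print(rescale(3.5))   # 1.0 -> the best possible product
```

The same caution applies here as above: once utilities are rescaled this way, they're fine for reporting relative product scores, but they should not be plugged back into the logit equation to simulate choice probabilities.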