Which RLH values to report?

I have a follow-up question to an earlier discussion about which RLH values to formally report (this is for an academic study).

I made the following observations with data from a real survey:

1.) The utility report file contains the respondent-level RLH values --> computed their average, which is 0.670 (see the sketch below this list)
2.) Log file: computed the average RLH based on the last 20k out of 40k iterations: RLH = 0.663
3.) Same as 2.), just with all 40k iterations: RLH = 0.662
4.) RLH reported by the HB estimation monitor: 0.662 (some exponential smoothing is imposed here, so I wouldn't take this as the average RLH metric anyway)
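
A minimal sketch of computation #1, assuming the respondent-level RLH values have been exported to a CSV; the file and column names ("hb_utilities.csv", "RLH") are placeholders, not necessarily the exact Sawtooth output format:

import pandas as pd

# Respondent-level RLH values from the HB utility report
# (file and column names are illustrative placeholders).
utilities = pd.read_csv("hb_utilities.csv")

# Approach #1: arithmetic mean of the respondent-level RLH values.
print(utilities["RLH"].mean())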

So when it comes to reporting the average RLH value across all respondents, I'd be inclined to take approach #1. However, I do not understand why there is a deviation between approach #1 and approaches #2/#3 (the average RLHs in the log file). Shouldn't they be the same?

Best regards,
Danny
related to an answer for: Reporting RLH and Percent Certainty
asked Apr 8, 2021 by danny Bronze (1,310 points)

1 Answer

0 votes
For #1, I think you want to compute the geometric mean, not the arithmetic average.
answered Apr 8, 2021 by Keith Chrzan Platinum Sawtooth Software, Inc. (117,375 points)
Thank you, Keith!
This gives me 0.665, which is closer to #2/#3 but still not the same. What could be the reason for the deviation? And why would I take the geometric mean instead of the arithmetic one?
To clarify: for #2 and #3 I also took the arithmetic mean.
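
For concreteness, this is the difference between the two means (a sketch; the RLH values are illustrative, not real data):

import math

# Illustrative respondent-level RLH values (placeholders).
rlh = [0.55, 0.71, 0.63, 0.80]

arithmetic = sum(rlh) / len(rlh)
# Geometric mean: exponentiate the mean of the logs.
geometric = math.exp(sum(math.log(r) for r in rlh) / len(rlh))

print(arithmetic, geometric)  # geometric <= arithmetic (AM-GM inequality)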

Bryan commented (here: https://legacy.sawtoothsoftware.com/forum/24014/compute-rlh-for-hb and here: https://legacy.sawtoothsoftware.com/forum/14548/reporting-rlh-and-percent-certainty) that the arithmetic mean should be calculated. Now I am confused...
My mistake, the arithmetic mean is the way to go. I am not sure why the means are not lining up between #1 and #2, but I'll look into it when I'm back in the office.
My colleague Walt clarified for me: We compute RLH = exp(log-likelihood / sum-of-task-weights). The weight for each task will usually be 1.0, but the user can set the value to whatever they want (we do not support different weights per task at this point).
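
In code, that formula amounts to something like this (a sketch; the chosen-alternative probabilities and weights are illustrative):

import math

# Probability of the chosen alternative in each task (illustrative)
# and the per-task weights (usually all 1.0).
p_chosen = [0.62, 0.48, 0.75]
weights = [1.0, 1.0, 1.0]

log_likelihood = sum(w * math.log(p) for w, p in zip(weights, p_chosen))
rlh = math.exp(log_likelihood / sum(weights))
# With unit weights this reduces to the geometric mean of the task probabilities.
print(rlh)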
 
At the individual level, the final RLH is the average of the RLH values across all draws after convergence is assumed. If you compute an RLH using the final utility estimates instead, there may be minute differences due to roundoff error.
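
If I read that correctly, the distinction is roughly the following (a sketch; the rlh_for_utilities helper, the task codings, and the draws are all hypothetical illustrations, not Sawtooth's actual code):

import math

def rlh_for_utilities(beta, tasks):
    # RLH for one respondent: geometric mean of the chosen-alternative
    # logit probabilities across tasks (unit task weights assumed).
    log_lik = 0.0
    for alternatives, chosen in tasks:
        expv = [math.exp(sum(b * x for b, x in zip(beta, alt)))
                for alt in alternatives]
        log_lik += math.log(expv[chosen] / sum(expv))
    return math.exp(log_lik / len(tasks))

# Two tasks, two alternatives each, with illustrative attribute codings.
tasks = [([[1.0, 0.0], [0.0, 1.0]], 0),
         ([[1.0, 1.0], [0.0, 0.0]], 1)]

# Illustrative post-convergence draws of one respondent's utilities.
draws = [[0.8, -0.2], [1.1, 0.1], [0.9, -0.1]]

# What the utilities file stores: the average of the per-draw RLHs...
avg_of_draw_rlh = sum(rlh_for_utilities(b, tasks) for b in draws) / len(draws)

# ...which is not quite the RLH of the averaged (point-estimate) utilities.
mean_beta = [sum(col) / len(draws) for col in zip(*draws)]
rlh_of_mean_beta = rlh_for_utilities(mean_beta, tasks)

print(avg_of_draw_rlh, rlh_of_mean_beta)  # close, but not identical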
 
At the population level, we compute the RLH using the total log-likelihood for all respondents and the sum of task weights across all respondents. The 'average' statistics reported during estimation (including RLH) are a bit of a misnomer. To keep performance optimal, we don't keep a sum or list of these statistics for each iteration. Instead, we compute the running value as 99% of the old running value plus 1% of the current iteration's value. So the values shown in the log and display will not be easily replicable; they weren't designed for replication, but rather as a quick check that the solution has face validity.
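
So the displayed value is essentially an exponentially weighted moving average, something like (a sketch with illustrative per-iteration values):

# Each iteration keeps 99% of the running value and mixes in 1% of the
# current iteration's value (an exponentially weighted moving average).
rlh_per_iteration = [0.60, 0.64, 0.66, 0.66]  # illustrative values

smoothed = rlh_per_iteration[0]
for r in rlh_per_iteration[1:]:
    smoothed = 0.99 * smoothed + 0.01 * r

print(smoothed)  # lags the plain arithmetic mean of the iterations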
...