# Calculating the RLH for Constant Sum

Hi,

The RLH computation differs when chips are involved instead of a single choice. I would like to know precisely how the RLH per respondent is calculated in the constant-sum case, and why the per-respondent RLH values are systematically lower for constant sum than for single choice.

In CBC/HB, single choice is treated the same as constant sum, with the caveat that the number of chips is 1. Internally, each concept carries the percentage of chips allocated to it as its response: in a single-choice task the selected concept has a response of 1, and if equal chips are allocated to two concepts, each has a response of 0.5.
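The response coding described above can be sketched as follows. This is a minimal illustration, not Sawtooth's code; the function name `chip_percentages` is mine.

```python
def chip_percentages(chips):
    """Convert raw chip counts per concept into the percentages
    that CBC/HB uses as the response for each concept."""
    total = sum(chips)
    return [c / total for c in chips]

# Single choice: 1 chip on the selected concept -> response of 1 for it.
single_choice = chip_percentages([0, 1, 0])   # [0.0, 1.0, 0.0]
# Equal chips on two concepts -> response of 0.5 for each.
equal_split = chip_percentages([5, 5, 0])     # [0.5, 0.5, 0.0]
```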

There is only one RLH calculation: RLH = e^(L / W), where L is the log-likelihood for the respondent and W is the sum of the task weights (the Total Task Weight from the HB settings multiplied by the number of tasks for the respondent).
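That formula is straightforward to sketch. This is an illustration only; the function name `rlh` is mine, and it assumes a constant task weight across tasks as described above.

```python
import math

def rlh(log_likelihood, total_task_weight, n_tasks):
    """RLH = e^(L / W), where W = Total Task Weight * number of tasks."""
    w = total_task_weight * n_tasks
    return math.exp(log_likelihood / w)

# Sanity check: chance-level fit on 10 tasks of 3 concepts each
# (log-likelihood of ln(1/3) per task) gives an RLH of 1/3.
chance = rlh(10 * math.log(1 / 3), 1.0, 10)
```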

The log-likelihood for the respondent is the sum of the log-likelihoods for each task.  For each task:

xb = task design multiplied by the estimate of beta (results in one value for each concept)
exb = e ^ xb (one value for each concept)
y = percentage of chips (one value for each concept)

p = product over concepts of exb^y (each concept's exb raised to its chip percentage)
s = sum of exb

log-likelihood = ln(p/s) * total task weight

As for why the values come out lower: because the chip percentages sum to 1 within a task, p/s equals the weighted geometric mean of the concept choice probabilities, the product over concepts of (exb/s)^y. When chips are spread over several concepts this geometric mean is capped below 1 (with equal chips on two concepts its maximum is 0.5, reached when both concepts have probability 0.5), whereas a single-choice task can in principle achieve a probability near 1. That is why per-respondent RLH values are systematically lower for constant sum.
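The per-task steps above can be sketched end to end. This is a minimal illustration under my own naming (`task_log_likelihood`), not Sawtooth's implementation; it assumes the design is given as one row of attribute codes per concept.

```python
import math

def task_log_likelihood(design, beta, y, task_weight=1.0):
    """Log-likelihood of one task, following the steps in the answer.
    design:      list of concept rows (each a list of attribute codes)
    beta:        estimated utilities, one per attribute code
    y:           percentage of chips per concept (sums to 1)
    task_weight: the total task weight from the HB settings
    """
    # xb: design multiplied by beta (one value per concept)
    xb = [sum(d * b for d, b in zip(row, beta)) for row in design]
    # exb: e^xb (one value per concept)
    exb = [math.exp(v) for v in xb]
    # p: product of exb^y over concepts; s: sum of exb
    p = math.prod(e ** yi for e, yi in zip(exb, y))
    s = sum(exb)
    return math.log(p / s) * task_weight
```

With a single-choice response such as y = [1, 0, 0], this reduces to the ordinary logit log-probability ln(exb_chosen / s) of the chosen concept.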
answered Oct 22, 2014 by Gold (20,030 points)