Choice-based vs. Allocation

Imagine a conjoint with doctors as respondents
Version 1:
Think about your last patient with this condition (and we record the key characteristics of this patient). Of these 6 drugs (an alternative-specific design: some drugs are fixed, some drugs have attributes, and one of the drugs is the NEW drug), which one would you prescribe to such a patient? So it's a choice task.
Then: think about another specific patient... and answer a CBC for this patient.
At the end, based on secondary data that tells us about the distribution of patient characteristics in the population, you piece it all together by weighting the results for the different patient types and get an aggregate picture (a distribution across drugs, across patient types).
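That final weighting step can be sketched as follows (the drug names, patient-type proportions, and predicted shares below are all invented for illustration; in practice the proportions would come from your secondary data and the shares from your simulator):

```python
# Aggregate predicted prescribing shares across patient types by
# weighting each patient type's share vector by its population frequency.
# All numbers here are hypothetical.

patient_type_weights = {"mild": 0.5, "moderate": 0.3, "severe": 0.2}

# Simulated share of prescriptions for each drug, within each patient type
shares_by_type = {
    "mild":     {"Drug A": 0.60, "Drug B": 0.30, "NEW": 0.10},
    "moderate": {"Drug A": 0.40, "Drug B": 0.35, "NEW": 0.25},
    "severe":   {"Drug A": 0.20, "Drug B": 0.30, "NEW": 0.50},
}

aggregate = {}
for ptype, weight in patient_type_weights.items():
    for drug, share in shares_by_type[ptype].items():
        aggregate[drug] = aggregate.get(drug, 0.0) + weight * share

print(aggregate)  # weighted shares sum to 1.0 across drugs
```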

Version 2:
Allocation-based conjoint. Think about all your patients with this condition. For each screen, indicate what % of your patients you'd allocate to each drug.

Is there any research that shows which approach predicts real life choices better? Or just thoughts?

Thanks a lot!
asked Jan 30, 2014 by Dimitri Bronze (625 points)
retagged Jan 30, 2014 by Walter Williams

1 Answer

+1 vote

Your Version 1 uses specific patients and relies on physician recall of those patients' details.  Variants include using the last X patients, using X specific patients whose records have been selected by the physicians' staff, and so on.  This version and all of its variants suffer from non-random sampling of the patient population.  All sorts of memory biases affect the doctor's recall and prevent the sample of patients he comes up with from being representative.  Also, unless his records staff are also sampling statisticians, any sample they draw will likely not be representative, either.

When I have been able to compare the patient populations which physicians recall to the actual % of patients of each type, the results were not close.  That may not be important, however, if you weight results to the true market proportions of patient segments.  

Another possible drawback of Version 1 is that patients may not fit neatly into the categories you will want to distribute them into (this happened to me a few times with this kind of study – patient heterogeneity was richer than the segmentation we had in place).  To the extent that extraneous factors may affect therapy decisions for specific flesh-and-blood patients in a way that is not representative, this, too, can throw off your forecasts.

The problem with Version 2 is that the physician's memory will likely tend toward typical patients or toward recent exceptional ones.  The sample the physician's memory constructs could bias the model quite a bit, and what we know about memory should not give us a lot of confidence in this approach.  It isn't like asking a consumer to predict their next 10 breakfast cereal purchases.  In fact, when I've been in a position to compare physicians' "next 10" predictions (or their "last 10," or % reports of recent past prescribing behavior) with behavioral data on prescriptions written, which we had at the individual physician level, these predictions matched up very poorly.

I am unaware of much academic research comparing Versions 1 and 2.  One informal study I conducted found the same answer both ways, after taking into account that Version 2, as one might expect, has a logit scale factor that makes for smaller utilities.

A third option resembles your Version 1 except that, instead of relying on physicians' recall of specific patients, we supply archetypal patients in the survey.  For example, if there are 4 distinct patient types (whose proportions in the market we know and want to represent), we might ask each CBC question but include 4 rows for responses below the question – one response for each of the 4 patient segments.  Now we don't have the extraneous patient-heterogeneity issue from Version 1 and we don't have the recall biases of Version 2.

I have never compared this Version 3 to either Version 1 or Version 2 but I’ve used it quite a bit.
answered Jan 30, 2014 by Keith Chrzan Platinum Sawtooth Software, Inc. (111,275 points)
Yes, Keith, very valid points, thank you. Yes, we've done Version 3 as well.
Clarification question: what do you mean that Version 2 has a "logit scale factor that makes for smaller utilities"?

Logit utilities have embedded in them a multiplicative scale factor.  This means that whenever we report our utilities as U1, U2, U3 and so on, U1 is really scale*u1, U2 is really scale*u2, and so on.  This factor is set by convention to 1, and in a given data set we never know how much of the magnitude of the utilities owes to this multiplicative scale factor.

When you have two logit models, however, they may differ in terms of their scale factors, their substantive utilities, neither, or both.

You can learn much more about this and about the statistical test that sometimes allows you to partially disentangle scale and utilities in the 1993 JMR article by Swait and Louviere.  

For the present purpose, the scale factor of allocation data is smaller than that of discrete choices, making the utilities that come out of an allocation-based model smaller, all else being equal, than those from a choice-based model.  This isn't necessarily a problem, however, since we're often rescaling our utilities before using them in simulations anyway (e.g., using the "exponent" in Sawtooth Software's simulation software).
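A minimal sketch of how that multiplicative scale factor plays out in a logit share simulation (the utility values and scale numbers below are invented for illustration; a simulator's "exponent" setting acts like the `scale` argument here):

```python
import math

def logit_shares(utilities, scale=1.0):
    """Multinomial logit shares, with the scale factor made explicit."""
    exps = [math.exp(scale * u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

u = [1.0, 0.5, 0.0]  # hypothetical substantive utilities for 3 drugs

# Larger scale (as with discrete-choice data): sharper, more extreme shares
sharp = logit_shares(u, scale=2.0)
# Smaller scale (as with allocation data): flatter shares, same rank order
flat = logit_shares(u, scale=0.5)

print(sharp)
print(flat)
```

The rank order of the alternatives is identical under both scales; only how "peaked" the predicted shares are changes, which is why rescaling before simulation can compensate.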