
Classical MBC question - information loss

Hi All,
The MBC exercise represents a classic restaurant menu situation, designed with a number of attribute pairs like the one below:

Attribute Menu_Item#7:
 - Shown
 - Not shown

Attribute Menu_Item#7_Prices:
 - $10
 - $20
 - $30

The second attribute in each pair was prohibited against the first (master) attribute of the same pair via alternative-specific design prohibitions, so a price only appears when its item is shown.

On each screen, respondents were asked to create their dinner by checking boxes among the available options and then to rate the dinner's overall appeal.

I am planning to use the dichotomized Appeal (Liking vs. Not) as the dependent variable and the respondents' check-box selections as predictors.

To do so, I would code menu items selected by respondents as 1 and those not selected as 0 (regardless of whether they were shown or not).

Is there any value in discriminating these zeroes, splitting them into "Shown and not selected" versus "Not shown"? That would make the coding 0-1-2 (not shown, shown & not selected, selected, respectively).
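For concreteness, the two coding schemes could be sketched like this (the task, item IDs, and selections are made up for illustration):

```python
# Hypothetical task: items 1-4 exist in the design, items 1-3 were shown,
# and the respondent checked items 1 and 3.
shown = {1, 2, 3}
selected = {1, 3}
all_items = [1, 2, 3, 4]

# Binary coding: selected = 1, everything else = 0 (shown or not)
binary = {i: int(i in selected) for i in all_items}

# Three-level coding: 0 = not shown, 1 = shown & not selected, 2 = selected
three_level = {
    i: 2 if i in selected else (1 if i in shown else 0)
    for i in all_items
}

print(binary)       # {1: 1, 2: 0, 3: 1, 4: 0}
print(three_level)  # {1: 2, 2: 1, 3: 2, 4: 0}
```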

I see some value in this split, as items "shown but not selected" should signal what is less preferred by the respondent. At the same time, I am not sure it would add much to the story, as non-selected items should NOT affect Appeal.

The extent of Appeal is associated with what has been selected; disliking a non-selected option should not penalize Appeal.

Should I just drop that part of the available information - the responses telling which items were not selected - and lose it? Or is there some use for it?
asked May 2, 2018 by furoley Bronze (885 points)
edited May 2, 2018 by furoley

1 Answer

It seems strange to me to model "appeal" as a function of independent variables that are the items selected by respondents. The independent variables would not truly be independent and level-balanced; they would be based on the respondents' preferences, so the logic seems circular to me.

It seems to me you have two models: first, how prices and availability of items lead to choices into the shopping cart; then, how the utility of each bundle placed in the shopping cart relates to the "appeal" rating scale. Tom Eagle's two-stage model, described in his paper, seems most relevant here:

Eagle, T. 2010. Modeling demand using simple methods: joint discrete/continuous modeling. In Sawtooth Software Conference Proceedings, pp. 283–308. Orem, Utah: Sawtooth Software.

Essentially, you are building a model of choice first using MNL approaches (like our MBC does).  But, then you build a second regression model that relates the utility of the alternative from the first model to the rating scale from your Appeal variable.
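As a rough sketch of that two-stage idea (every utility, price, and rating below is a made-up illustration; a real application would estimate stage 1 with MBC/MNL estimation software):

```python
import numpy as np

# Stage 1 (assumed done elsewhere): per-item utilities from a choice model,
# e.g. an alternative-specific constant plus a price effect. Values made up.
item_utility = {"item7": 1.2, "item8": 0.4, "item9": -0.3}
price_coef = -0.08  # hypothetical utility change per dollar

def bundle_utility(selected, prices):
    """Sum utilities of items placed in the cart at their shown prices."""
    return sum(item_utility[i] + price_coef * prices[i] for i in selected)

# Stage 2: relate bundle utility to the Appeal rating via simple OLS.
# Toy data: bundles chosen in three tasks and their Appeal ratings.
bundles = [({"item7"}, {"item7": 10.0}),
           ({"item7", "item8"}, {"item7": 20.0, "item8": 10.0}),
           ({"item9"}, {"item9": 30.0})]
appeal = np.array([7.0, 6.0, 3.0])

u = np.array([bundle_utility(sel, pr) for sel, pr in bundles])
X = np.column_stack([np.ones_like(u), u])          # intercept + utility
beta, *_ = np.linalg.lstsq(X, appeal, rcond=None)  # OLS fit
print(beta)  # intercept and slope linking bundle utility to Appeal
```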

Or there are classes of models that I haven't played around with that involve predicting consideration, and then choice given consideration.  But, your second stage is a rating, rather than a choice.
answered May 2, 2018 by Bryan Orme Platinum Sawtooth Software, Inc. (201,565 points)
Thank you, Bryan. I agree on the dependency and imbalance you mentioned.
I understand the idea you suggested. Does it mean I have to run dozens of choice models at stage #1 - a separate MNL model for each checkbox shown? If yes, that's dozens and dozens of models, not to mention any cross effects, which could turn dozens of models into hundreds...

I was trying to avoid all this burden by using the approach I initially described, which, I agree, is not based on an orthogonal introduction of attributes.

I was even thinking I could evaluate that bias by shuffling the dichotomized Appeal responses in the data (thus destroying their correlation with the predictors) and then running the MNL model against this shuffled dependent variable. In an ideal experiment with a proper design, this manipulation would end up in zero or near-zero utilities.
But consistent deviations of the resulting utilities from zero would indicate/estimate the bias brought by the predictors' dependence and imbalance. These bias estimates could then somehow be blended with, or used to correct, the main MNL model.
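That shuffling idea is essentially a permutation test. A minimal sketch of the mechanics, with a toy effect statistic standing in for a full MNL estimation (all data below is made up):

```python
import random

# Toy data: binary predictor (item selected or not) per observation,
# plus the dichotomized Appeal outcome. Values are invented.
selected = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
appeal   = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]

def effect(x, y):
    """Toy stand-in for a utility estimate: difference in mean Appeal
    between observations where the item was vs. wasn't selected."""
    sel = [b for a, b in zip(x, y) if a == 1]
    uns = [b for a, b in zip(x, y) if a == 0]
    return sum(sel) / len(sel) - sum(uns) / len(uns)

observed = effect(selected, appeal)

# Re-estimate on shuffled Appeal many times: with no real relationship,
# the effect should center on zero; a consistent offset would flag bias.
random.seed(42)
null_effects = []
for _ in range(1000):
    shuffled = appeal[:]
    random.shuffle(shuffled)
    null_effects.append(effect(selected, shuffled))

mean_null = sum(null_effects) / len(null_effects)
print(round(observed, 3), round(mean_null, 3))
```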
If you take the approach that each item on the menu that could be chosen is a dependent variable, then indeed there is an MNL model per dependent variable. Cross-effects do not create more models, just more parameters to be estimated within each model. There are other ways to go about MBC modeling that don't involve a separate binary logit model per item on the menu, which I described in my conference paper: http://www.sawtoothsoftware.com/download/techpap/mbcconf2010.pdf
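For illustration, here is a minimal sketch of the "one binary logit per item" structure on simulated data (the hand-rolled gradient fit and all data below stand in for real MBC/logit estimation software):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_items = 200, 3

# Design: per task, each item's shown price in $10 units (made-up values).
price10 = rng.choice([1.0, 2.0, 3.0], size=(n_tasks, n_items))

# Simulated checkbox choices: probability of selection falls with price.
true_intercepts = np.array([1.0, 0.5, -0.2])
logits = true_intercepts - 1.0 * price10
chosen = (rng.random((n_tasks, n_items)) < 1 / (1 + np.exp(-logits))).astype(float)

def fit_logit(X, y, steps=3000, lr=0.5):
    """Plain gradient-ascent logistic regression (intercept + slopes)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

# One binary logit per menu item: that item's price predicts its checkbox.
# Cross-effects would mean adding the OTHER items' prices as extra columns
# (more parameters within the same model), not adding more models.
fits = [fit_logit(price10[:, [j]], chosen[:, j]) for j in range(n_items)]
for j, w in enumerate(fits):
    print(f"item {j}: intercept={w[0]:.2f}, price coef per $10={w[1]:.2f}")
```

With a cross-effects specification, the loop stays at one model per item; only the design matrix passed to each fit gets wider.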

One of the models I describe may be done with a single MNL model (referred to as the "Volumetric CBC Model") in that paper.  It's a simplification, but if you want to make a more parsimonious model for that first stage of choice, it could make life easier for you.
This is a brilliant idea, Bryan. Thank you!
Have you explored anchoring Maximum Volume to something other than clicks (check-boxed items) as described in your paper - for instance, to the total dollars spent in the task?

I feel the maximum dollar amount across tasks might be close to the cap a respondent has in mind when they select items.

For example, ten $3.00 items selected on some screen doesn't mean 10 is a valid threshold or MEV for all the other screens where more expensive items are offered.
I've only tried treating the dependent variable as the number of items selected, not the total dollars spent in the task, because total dollars spent is directly related to the pricing variable (which is supposed to be an independent variable). I capture the relationship between the independent variables (price, plus the alternative-specific constants for the items) and choice probability.
Thank you, Bryan. The paper suggests taking the theoretical max among what was shown. The complexity is that the number of menu items shown varies from task to task. I am not sure that selecting 5 items from 10 shown is any better (i.e., should be coded as having better shares) than selecting the same 5 items out of 15 shown on the next screen. So simply adding irrelevant items to a specific task might penalize the shares of what the respondent likes and selects.
I think it should probably work out fine if the theoretical max is the largest number of items that could be checked when the greatest number of items in your availability design is actually shown in a task. Remember to include the availability dummy code per item in the model specification. So, for each item you have its alternative-specific constant, its price coefficient, and its availability coefficient.
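A sketch of how that per-item coding could look for a single task (the column layout and the handling of unavailable items here are illustrative assumptions, not the exact MBC specification):

```python
# Hypothetical coding for one menu item in one task: each item carries an
# alternative-specific constant (ASC), a price term, and an availability
# dummy, per the specification described above.
def code_item(shown, price):
    """One item's design-matrix entries for a single task (illustrative)."""
    return {
        "asc": 1.0,                        # alternative-specific constant
        "price": price if shown else 0.0,  # price enters only when shown
        "available": 1.0 if shown else 0.0,
    }

print(code_item(shown=True, price=20.0))  # {'asc': 1.0, 'price': 20.0, 'available': 1.0}
print(code_item(shown=False, price=0.0))  # {'asc': 1.0, 'price': 0.0, 'available': 0.0}
```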