2.2 Limited Dependent Variable Models

This section deals with models in which the dependent variable is discrete. Many interesting problems, like labour force participation, presidential voting, transport mode choice and brand choice, are discrete in nature. In particular, we consider discrete choice models in the case where panel data are available. This makes it possible, for example, to follow individuals and their choices over time, so that richer behavioural models can be constructed. Although the number of parameters in these models does not necessarily increase, the likelihood function, and therefore estimation, becomes more complex. In this section we describe the multinomial multiperiod probit, the multivariate probit and the mixed multinomial logit model. Examples are given.

We refer to [37] for a general introduction to limited dependent and qualitative variables in econometrics and to [22] for a basic introduction motivating such models in relation to marketing.


2.2.1 Multinomial Multiperiod Probit

2.2.1.1 Definition

Denote by $ U_{ijt}$ the unobserved utility perceived by individual $ i$ who chooses alternative $ j$ at time $ t$. This utility may be modelled as follows

$\displaystyle U_{ijt} = \boldsymbol{X}_{ijt}^T \boldsymbol{\beta} + \epsilon_{ijt}\;,$ (2.1)

where $ i=1,\ldots, I$, $ j=1,\ldots,J$, $ t=1,\ldots,T_i$, $ \boldsymbol{X}_{ijt}$ is a $ k$-dimensional vector of explanatory variables, $ \boldsymbol {\beta }$ is a $ k$-dimensional parameter vector and $ \epsilon_{ijt}$ is a random shock known to individual $ i$. This individual chooses alternative $ j$ in period $ t$ if

$\displaystyle U_{ijt} > U_{imt} \quad \forall j \ne m\;.$ (2.2)

We observe $ \boldsymbol{d}_i=(d_{i1},\ldots,d_{iT_i})^T$ where $ d_{it} = j$ if individual $ i$ chooses alternative $ j$ at time $ t$. We suppose that each individual makes exactly one choice in each period, i.e. the choices are mutually exclusive. The multinomial multiperiod probit model is obtained by assuming

$\displaystyle \boldsymbol{\epsilon}_{i}=(\epsilon_{i11},\ldots,\epsilon_{iJ1},\ldots,\epsilon_{i1T_i},\ldots,\epsilon_{iJT_i})^T \sim \mathrm{IIDN}(0,\boldsymbol{\Sigma})\;.$ (2.3)

Consequently,

$\displaystyle P_i = \mathrm{P}( d_{i} ) = \mathrm{P} \left(\bigcap_{t=1}^{T_i}\bigcap_{m \ne d_{it}} \; U_{i,d_{it},t}>U_{imt} \right) = \mathrm{P} \left(\bigcap_{t=1}^{T_i}\bigcap_{m \ne d_{it}} \; \epsilon_{i,d_{it},t} - \epsilon_{imt} > (\boldsymbol{X}_{imt} - \boldsymbol{X}_{i,d_{it},t})^T \boldsymbol{\beta} \right)\;,$ (2.4)

which is a $ (T_i \times J)$-variate integral. However, since individual choices are based on utility comparisons, it is conventional to work in utility differences relative to alternative $ J$. If we multiply the utilities in (2.1) by a constant, the probability event in (2.4) is invariant; thus a different scaling of the utilities does not alter the choices of the individuals. The rescaled relative utility is then defined as

$\displaystyle \widetilde{U}_{ijt} = (U_{ijt}-U_{iJt})(\sigma_{11}+\sigma_{JJ}-2\sigma_{1J})^{-1/2} = \left((\boldsymbol{X}_{ijt} - \boldsymbol{X}_{iJt})^T \boldsymbol{\beta} + \epsilon_{ijt} - \epsilon_{iJt}\right) (\sigma_{11}+\sigma_{JJ}-2\sigma_{1J})^{-1/2} = \widetilde{\boldsymbol{X}}_{ijt}^{T} \boldsymbol{\beta} + \widetilde{\epsilon}_{ijt}\;.$ (2.5)

An individual chooses alternative $ j$ in period $ t$ if

$\displaystyle \widetilde{U}_{ijt} > \widetilde{U}_{imt} \quad \forall j \ne m\;.$ (2.6)

As an identification restriction, one usually imposes a unit variance for the last alternative expressed in utility differences. Define

$\displaystyle \widetilde{\boldsymbol{\epsilon}}_{i}=(\widetilde{\epsilon}_{i11},\ldots,\widetilde{\epsilon}_{i,J-1,1},\ldots,\widetilde{\epsilon}_{i1T_i},\ldots,\widetilde{\epsilon}_{i,J-1,T_i})^T \sim \mathrm{IIDN}(0,\widetilde{\boldsymbol{\Sigma}})\;,$ (2.7)

where $ \widetilde {\boldsymbol{\Sigma }}$ is the transformed $ \boldsymbol{\Sigma}$ with $ \widetilde{\sigma}_{J-1,J-1}=1$, so that (2.4) becomes

$\displaystyle P_i = \mathrm{P} \left(\bigcap_{t=1}^{T_i}\bigcap_{m \ne d_{it}} \; \widetilde{\epsilon}_{i,d_{it},t} - \widetilde{\epsilon}_{imt} > (\widetilde{\boldsymbol{X}}_{imt} - \widetilde{\boldsymbol{X}}_{i,d_{it},t})^T \boldsymbol{\beta} \right)\;,$ (2.8)

which is a $ T_i(J-1)$-variate integral. Note that when the $ \widetilde{\epsilon}_{ijt}$'s are serially uncorrelated, this probability can be computed as the product of $ T_i$ integrals of dimension $ J-1$, which is easier to compute, but this rules out interesting cases; see the applications below.
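To make the computation concrete, the following minimal sketch (ours, not part of the original text, with hypothetical names) implements a crude frequency simulator for (2.8): draw the error vector $ R$ times from $ N(0,\widetilde{\boldsymbol{\Sigma}})$ and count how often the implied choices coincide with the observed $ \boldsymbol{d}_i$. In practice a smooth simulator such as GHK is preferred for estimation.

```python
import numpy as np

def probit_choice_prob(X_tilde, d, Sigma_tilde, beta, R=10_000, seed=0):
    """Crude frequency simulator of the choice probability (2.8) for one
    individual. X_tilde: (T, J-1, k) differenced regressors; d: (T,)
    observed choices coded 0..J-1, with J-1 the base alternative whose
    utility is zero in differences; Sigma_tilde: (T*(J-1), T*(J-1))."""
    rng = np.random.default_rng(seed)
    T, Jm1, k = X_tilde.shape
    V = X_tilde @ beta                                 # systematic utilities, (T, J-1)
    eps = rng.multivariate_normal(np.zeros(T * Jm1), Sigma_tilde, size=R)
    U = V[None, :, :] + eps.reshape(R, T, Jm1)         # simulated relative utilities
    U_full = np.concatenate([U, np.zeros((R, T, 1))], axis=2)
    hits = (U_full.argmax(axis=2) == d[None, :]).all(axis=1)
    return hits.mean()                                 # fraction of draws consistent with d
```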


2.2.1.2 Estimation

This section briefly explains how the multinomial multiperiod probit model can be estimated in the classical or Bayesian framework. More details can be found in [25].

2.2.1.2.1 Classical Estimation

Since we assume independent observations on individuals, the likelihood is

$\displaystyle Pr (\boldsymbol{d} \mid \boldsymbol{X},\boldsymbol{\beta}, \widetilde{\boldsymbol{\Sigma}}) = \prod_{i=1}^{I} P_i\;,$ (2.9)

where $ \boldsymbol{d}=(\boldsymbol{d}_1,\ldots,\boldsymbol{d}_I)$ and $ \boldsymbol {X}$ denotes all the observations on the explanatory variables. Evaluation of this likelihood is infeasible for reasonable values of $ T_i$ and $ J$. Classical maximum likelihood estimation methods are usually, except in some trivial cases, based on numerical search algorithms that require many evaluations of the likelihood function and are therefore not suitable for this model. For more information on classical estimation, see [29], [27] and [28].

Alternative estimation methods are based on simulations of the choice probabilities. The simulated maximum likelihood (SML) method maximizes the simulated likelihood which is obtained by substituting the simulated choice probabilities in (2.9). The method of simulated moments is a simulation based substitute for the generalized method of moments. For further information on these estimation methods we refer to [27].
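As a hedged illustration of SML (again ours, not from the original text), the simulated log-likelihood can be assembled from any probability simulator, e.g. the `probit_choice_prob` sketch above, and handed to a derivative-free optimizer; the crude frequency simulator is not smooth in the parameters, which is one reason smooth simulators such as GHK are used in practice. The fixed seed implements common random numbers across evaluations.

```python
import numpy as np
from scipy.optimize import minimize

def neg_sim_loglik(beta, data, Sigma_tilde, R=1000):
    # data: list of (X_tilde_i, d_i) pairs, one per individual (hypothetical layout)
    ll = 0.0
    for X_i, d_i in data:
        p = probit_choice_prob(X_i, d_i, Sigma_tilde, beta, R=R, seed=42)
        ll += np.log(max(p, 1e-300))   # guard against zero simulated probabilities
    return -ll

# SML estimate of beta for a given Sigma_tilde (derivative-free search):
# res = minimize(neg_sim_loglik, beta0, args=(data, Sigma_tilde), method="Nelder-Mead")
```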

2.2.1.2.2 Bayesian Inference

The posterior density is

$\displaystyle \varphi (\boldsymbol{\beta}, \widetilde{\boldsymbol{\Sigma}} \mid \boldsymbol{d}, \boldsymbol{X}) \propto Pr (\boldsymbol{d} \mid \boldsymbol{X}, \boldsymbol{\beta}, \widetilde{\boldsymbol{\Sigma}}) \, \varphi (\boldsymbol{\beta}, \widetilde{\boldsymbol{\Sigma}})\;,$ (2.10)

where $ \varphi (\boldsymbol{\beta}, \widetilde{\boldsymbol{\Sigma}})$ is the prior density. This does not solve the problem of evaluating a high dimensional integral in the likelihood, and it remains hard to compute, for example, posterior means. Data augmentation, see for example [49], provides a solution because this technique allows one to set up a Gibbs sampling scheme using distributions that are easy to draw from. The idea is to augment the parameter vector with $ \widetilde{\boldsymbol{U}}$, the latent utilities, so that the posterior density in (2.10) changes to

$\displaystyle \varphi (\boldsymbol{\beta}, \widetilde{\boldsymbol{\Sigma}}, \widetilde{\boldsymbol{U}} \mid \boldsymbol{d}, \boldsymbol{X}) \propto Pr (\boldsymbol{d} \mid \boldsymbol{X}, \boldsymbol{\beta}, \widetilde{\boldsymbol{\Sigma}}, \widetilde{\boldsymbol{U}}) \, \varphi (\widetilde{\boldsymbol{U}} \mid \boldsymbol{X}, \boldsymbol{\beta}, \widetilde{\boldsymbol{\Sigma}}) \, \varphi (\boldsymbol{\beta}, \widetilde{\boldsymbol{\Sigma}})$ (2.11)

implying three blocks in the Gibbs sampler: $ \varphi(\boldsymbol{\beta} \mid \widetilde{\boldsymbol{\Sigma}}, \widetilde{\boldsymbol{U}}, \boldsymbol{d}, \boldsymbol{X})$, $ \varphi (\widetilde{\boldsymbol{\Sigma}} \mid \boldsymbol{\beta}, \widetilde{\boldsymbol{U}}, \boldsymbol{d}, \boldsymbol{X})$ and $ \varphi (\widetilde{\boldsymbol{U}} \mid \boldsymbol{\beta}, \widetilde{\boldsymbol{\Sigma}}, \boldsymbol{d}, \boldsymbol{X})$. For more details on the Gibbs sampler we refer to Chaps. II.3 and III.11. For the first two blocks, the model in (2.5) is the conventional regression model, since the utilities, once simulated, are observed. For the last block, note that $ Pr (\boldsymbol{d} \mid \boldsymbol{X}, \boldsymbol{\beta}, \widetilde{\boldsymbol{\Sigma}},\widetilde{\boldsymbol{U}})$ is an indicator function, equal to one if $ \widetilde{\boldsymbol{U}}$ is consistent with $ \boldsymbol{d}$ and zero otherwise.

2.2.1.3 Applications

It is possible to extend the model in (2.5) in various ways, such as alternative-specific $ \boldsymbol {\beta }$'s, individual heterogeneity or a dynamic specification.

[41] propose a dynamic specification

$\displaystyle \Delta \widetilde{\boldsymbol{U}}_{it} = \Delta \widetilde{\boldsymbol{X}}_{it}(\boldsymbol{\alpha} + \boldsymbol{\alpha}_i) + \boldsymbol{\Pi}\left(\widetilde{\boldsymbol{U}}_{i,t-1} - \widetilde{\boldsymbol{X}}_{i,t-1}(\boldsymbol{\beta} + \boldsymbol{\beta}_i)\right) + \boldsymbol{\eta}_{it}\;,$ (2.12)

where $ \widetilde{\boldsymbol{U}}_{it}$ is the $ (J-1)$-dimensional vector of utilities of individual $ i$, $ \Delta \widetilde{\boldsymbol{U}}_{it}= \widetilde{\boldsymbol{U}}_{it}-\widetilde{\boldsymbol{U}}_{i,t-1}$, $ \widetilde{\boldsymbol{X}}_{i,t-1}$ and $ \Delta \widetilde{\boldsymbol{X}}_{it}$ are matrices of dimension $ (J-1)\times k$ for the explanatory variables, $ \boldsymbol {\alpha }$ and $ \boldsymbol {\beta }$ are $ k$-dimensional parameter vectors, $ \boldsymbol{\Pi}$ is a $ (J-1)\times (J-1)$ parameter matrix with eigenvalues inside the unit circle, $ \boldsymbol{\eta}_{it} \sim N(0,\widetilde{\boldsymbol{\Sigma}})$, and $ \boldsymbol{\alpha}_{i}$ and $ \boldsymbol{\beta}_{i}$ are random individual effects with the same dimension as $ \boldsymbol {\alpha }$ and $ \boldsymbol {\beta }$. These individual heterogeneity effects are assumed to be normally distributed: $ \boldsymbol{\alpha}_{i} \sim N(0, \boldsymbol{\Sigma}_{\boldsymbol{\alpha}})$ and $ \boldsymbol{\beta}_{i} \sim N(0, \boldsymbol{\Sigma}_{\boldsymbol{\beta}})$. The specification in (2.12) is a vector error-correction model where the parameters $ \boldsymbol{\alpha}+\boldsymbol{\alpha}_{i}$ and $ \boldsymbol{\beta}+\boldsymbol{\beta}_{i}$ measure the short-run and long-run effects, respectively. The parameters in $ \boldsymbol{\Pi}$ determine the speed at which deviations from the long-run relationship are adjusted.

The model parameters are $ \boldsymbol{\beta}, \boldsymbol{\alpha}, \widetilde{\boldsymbol{\Sigma}}, \boldsymbol{\Sigma}_{\boldsymbol{\beta}}, \boldsymbol{\Sigma}_{\boldsymbol{\alpha}}$ and $ \boldsymbol{\Pi}$, and they are augmented by the latent utilities $ \widetilde{\boldsymbol{U}}_{it}$. Bayesian inference may be done by Gibbs sampling as described in the estimation part above. Table 2.1 describes, for each of the nine blocks, which conditional posterior distribution is used. For example, $ \boldsymbol{\beta}$ has a conditional (on all other parameters) posterior density that is normal.


Table 2.1: Summary of conditional posteriors for (2.12)
Parameter Conditional posterior
$ \boldsymbol{\beta}, \boldsymbol{\beta}_i, \boldsymbol{\alpha}, \boldsymbol{\alpha}_i$ Multivariate normal distributions
$ \widetilde{\boldsymbol{\Sigma}}, \boldsymbol{\Sigma}_{\boldsymbol{\alpha}}, \boldsymbol{\Sigma}_{\boldsymbol{\beta}}$ Inverted Wishart distributions
$ \boldsymbol{\Pi}$ Matrix normal distribution
$ \widetilde{\boldsymbol{U}}_{it}$ Truncated multivariate normal

As an illustration we reproduce the results of [41], who provided their Gauss code (which we slightly modified). They use optical scanner data on purchases of four brands of saltine crackers. [13] use the same data set to estimate a static multinomial probit model. The data set contains all purchases (choices) of crackers of $ 136$ households over a period of two years, yielding $ 3292$ observations. Variables such as the prices of the brands and whether there was a display and/or newspaper feature of the considered brands at the time of purchase are also observed and used as the explanatory variables forming $ \boldsymbol{X}_{ijt}$ (and then transformed into $ \widetilde{\boldsymbol{X}}_{ijt}$). Table 2.2 gives the means of these variables. Display and Feature are dummy variables, e.g. Sunshine was displayed on $ 13\,{\%}$ and featured on $ 4\,{\%}$ of the purchase occasions. The average market shares reflect the observed individual choices, e.g. $ 7\,{\%}$ of the choices were Sunshine.


Table 2.2: Means of $ X_{it}$ variables in (2.12)
  Sunshine Keebler Nabisco Private Label
Market share $ 0.07$ $ 0.07$ $ 0.54$ $ 0.32$
Display $ 0.13$ $ 0.11$ $ 0.34$ $ 0.10$
Feature $ 0.04$ $ 0.04$ $ 0.09$ $ 0.05$
Price $ 0.96$ $ 1.13$ $ 1.08$ $ 0.68$

Table 2.3 shows posterior means and standard deviations for the $ \boldsymbol {\alpha }$ and $ \boldsymbol {\beta }$ parameters. They are computed from $ 50{,}000$ draws after dropping $ 20{,}000$ initial draws. The prior on $ \widetilde {\boldsymbol{\Sigma }}$ is inverted Wishart, denoted by $ IW(\boldsymbol{S},\nu)$, with $ \nu=10$ and $ \boldsymbol{S}$ chosen such that $ E(\widetilde{\boldsymbol{\Sigma}})=\boldsymbol{I}_3$. Note that [41] use a prior such that $ E(\widetilde{ \boldsymbol{\Sigma}}^{-1})=\boldsymbol{I}_3$. On the other parameters we put uninformative priors. As expected, Display and Feature have positive effects on the choice probabilities and Price has a negative effect. This holds both in the short run and the long run. With respect to the private label (which serves as reference category), the posterior means of the intercepts are positive, except for Sunshine, whose intercept is imprecisely estimated.


Table 2.3: Posterior moments of $ \boldsymbol {\beta }$ and $ \boldsymbol {\alpha }$ in (2.12)
  $ \boldsymbol {\beta }$ parameter $ \boldsymbol {\alpha }$ parameter   Intercepts
  mean st. dev. mean st. dev.   mean st. dev.
Display $ 0.307$ ($ 0.136$) $ 0.102$ ($ 0.076$) Sunshine $ {-0.071}$ ($ 0.253$)
Feature $ 0.353$ ($ 0.244$) $ 0.234$ ($ 0.090$) Keebler $ 0.512$ ($ 0.212$)
Price $ -1.711$ ($ 0.426$) $ -2.226$ ($ 0.344$) Nabisco $ 1.579$ ($ 0.354$)

Table 2.4 gives the posterior means and standard deviations of $ \widetilde {\boldsymbol{\Sigma }}$, $ \boldsymbol{\Pi}$, $ \boldsymbol{\Sigma}_{\boldsymbol{\beta}}$ and $ \boldsymbol{\Sigma}_{\boldsymbol{\alpha}}$. Note that the reported last element of $ \widetilde {\boldsymbol{\Sigma }}$ is equal to $ 1$ in order to identify the model. This is done, after running the Gibbs sampler with $ \widetilde {\boldsymbol{\Sigma }}$ unrestricted, by dividing the variance-related parameter draws by $ \widetilde{\sigma}_{J-1,J-1}$; the other parameter draws are divided by the square root of the same quantity. [39] propose an alternative approach where $ \widetilde{\sigma}_{J-1,J-1}$ is fixed to $ 1$ by construction, i.e. a fully identified parameter approach. They write

$\displaystyle \widetilde{\boldsymbol{\Sigma}} = \begin{pmatrix} \boldsymbol{\Phi} + \boldsymbol{\gamma}\boldsymbol{\gamma}^T & \boldsymbol{\gamma}\\ \boldsymbol{\gamma}^T & 1\\ \end{pmatrix}$ (2.13)

and show that the conditional posterior of $ \boldsymbol{\gamma }$ is normal and that of $ \boldsymbol{\Phi}$ is Wishart, so that draws of $ \widetilde {\boldsymbol{\Sigma }}$ are easily obtained. This approach is of particular interest when a sufficiently informative prior on $ \widetilde {\boldsymbol{\Sigma }}$ is used. A drawback of this approach is that the Gibbs sampler has higher autocorrelation and is more sensitive to initial conditions.
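A small sketch (ours, with hypothetical names) of the ex post rescaling described before (2.13): draws from the unrestricted sampler are normalized so that the last diagonal element of $ \widetilde{\boldsymbol{\Sigma}}$ equals one.

```python
import numpy as np

def rescale_draws(Sigma_draws, beta_draws):
    """Impose sigma_{J-1,J-1} = 1 ex post: divide variance-related draws by
    the last diagonal element and the remaining draws by its square root.
    Sigma_draws: (R, J-1, J-1) Gibbs draws; beta_draws: (R, k)."""
    s = Sigma_draws[:, -1, -1]                  # (R,) unrestricted variances
    return Sigma_draws / s[:, None, None], beta_draws / np.sqrt(s)[:, None]
```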


Table 2.4: Posterior means and standard deviations of $ \widetilde {\boldsymbol{\Sigma }}$, $ \boldsymbol{\Pi}$, $ \boldsymbol{\Sigma}_{\boldsymbol{\beta}}$ and $ \boldsymbol{\Sigma}_{\boldsymbol{\alpha}}$ in (2.12)
[Four matrices of posterior means, with posterior standard deviations in parentheses: $ \widetilde{\boldsymbol{\Sigma}}$ ($ 3\times 3$, last diagonal element fixed to $ 1$), $ \boldsymbol{\Pi}$ ($ 3\times 3$), $ \boldsymbol{\Sigma}_{\boldsymbol{\beta}}$ ($ 6\times 6$) and $ \boldsymbol{\Sigma}_{\boldsymbol{\alpha}}$ ($ 3\times 3$); the full numerical entries are not reproduced here.]

The relatively large posterior means of the diagonal elements of $ \boldsymbol{\Pi}$ show that there is persistence in brand choice. The matrices $ \boldsymbol{\Sigma}_{\boldsymbol{\beta}}$ and $ \boldsymbol{\Sigma}_{\boldsymbol{\alpha}}$ measure the unobserved heterogeneity. There seems to be substantial heterogeneity across the individuals, especially for the price of the products (see the third diagonal elements of both matrices). The last three elements in $ \boldsymbol{\Sigma}_{\boldsymbol{\beta}}$ are related to the intercepts.

The multinomial probit model is frequently used for marketing purposes. For example, [1] use ketchup purchase data to emphasize the importance of a detailed understanding of the distribution of consumer heterogeneity and identification of preferences at the customer level. In fact, the disaggregate nature of many marketing decisions creates the need for models of consumer heterogeneity which pool data across individuals while allowing for the analysis of individual model parameters. The Bayesian approach is particularly suited for that, contrary to classical approaches, which yield only aggregate summaries of heterogeneity.


2.2.2 Multivariate Probit

The multivariate probit model relaxes the assumption that choices are mutually exclusive, which was made in the multinomial model discussed before. In this model, $ \boldsymbol{d}_i$ may contain several $ 1$'s. [10] discuss classical and Bayesian inference for this model. They also provide examples on voting behaviour, on the health effects of air pollution and on labour force participation.


2.2.3 Mixed Multinomial Logit

2.2.3.1 Definition

The multinomial logit model is defined as in (2.1), except that the random shock $ \epsilon_{ijt}$ is extreme value (or Gumbel) distributed. This gives rise to the independence from irrelevant alternatives (IIA) property, which essentially means that $ {\mathrm{Cov}}\,(U_{ijt},U_{ikt})=0 \; \forall j \ne k$. Like the probit model, the mixed multinomial logit (MMNL) model alleviates this restrictive IIA property by treating the $ \boldsymbol {\beta }$ parameter as a random vector with density $ f_{\boldsymbol{\theta}} (\boldsymbol{\beta})$. The latter density is called the mixing density and is usually assumed to be a normal, lognormal, triangular or uniform distribution. To make clear why this model does not suffer from the IIA property, consider the following example. Suppose that there is only one explanatory variable and that $ \beta \sim N (\bar{\beta}, \bar{\sigma}^2)$. We can then write (2.1) as

$\displaystyle U_{ijt} = X_{ijt}\bar{\beta} + X_{ijt} \bar{\sigma} z + \epsilon_{ijt} = X_{ijt}\bar{\beta} + \epsilon_{ijt}^{\ast}\;,$ (2.14)

where $ z \sim N(0,1)$, implying that the variance of $ \epsilon_{ijt}^{\ast}$ depends on the explanatory variable and that there is nonzero covariance between utilities for different alternatives.
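Explicitly, using the facts (not stated above) that a Gumbel shock has variance $ \pi^2/6$ and that the $ \epsilon_{ijt}$'s are independent across alternatives,

$\displaystyle \mathrm{Var}\,(\epsilon_{ijt}^{\ast}) = X_{ijt}^2\, \bar{\sigma}^2 + \pi^2/6\;, \qquad {\mathrm{Cov}}\,(U_{ijt},U_{ikt}) = X_{ijt}\, X_{ikt}\, \bar{\sigma}^2 \quad (j \ne k)\;,$

which is in general nonzero, in contrast with the pure logit case $ \bar{\sigma}=0$.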

The mixed logit probability is given by

$\displaystyle P_i = \int \prod_{t=1}^{T_i} \left( \frac{\mathrm{e}^{\boldsymbol{X}_{i,d_{it},t}^T \boldsymbol{\beta}}}{\sum_{j=1}^J \mathrm{e}^{\boldsymbol{X}_{ijt}^T \boldsymbol{\beta}}} \right) f_{\boldsymbol{\theta}} (\boldsymbol{\beta}) \,{\mathrm{d}} \boldsymbol{\beta}\;,$ (2.15)

where the term between brackets is the logit choice probability arising from the difference between two extreme value distributions. The model parameter vector is $ \boldsymbol {\theta }$. Note that one may want to keep some elements of $ \boldsymbol {\beta }$ fixed, as in the usual logit model. One usually keeps random the elements of $ \boldsymbol {\beta }$ corresponding to the variables that are believed to create correlation between alternatives. The mixed logit model is quite general: [40] demonstrate that any random utility model can be approximated to any degree of accuracy by a mixed logit with an appropriate choice of variables and mixing distribution.

2.2.3.2 Estimation

2.2.3.2.1 Classical Estimation

Estimation of the MMNL model can be done by SML or the method of simulated moments or simulated scores. To do this, the logit probability in (2.15) is replaced by its simulated counterpart

$\displaystyle SP_i = \frac{1}{R} \sum_{r=1}^{R} \prod_{t=1}^{T_i} \left( \frac{\mathrm{e}^{\boldsymbol{X}_{i,d_{it},t}^T \boldsymbol{\beta}^r}}{\sum_{j=1}^J \mathrm{e}^{\boldsymbol{X}_{ijt}^T \boldsymbol{\beta}^r}} \right)\;,$ (2.16)

where the $ \{\boldsymbol{\beta}^r\}_{r=1}^R$ are i.i.d. draws from $ f_{\boldsymbol{\theta}} (\boldsymbol{\beta})$. The simulated likelihood is the product of all the individual $ SP_i$'s. The simulated log-likelihood can be maximized with respect to $ \boldsymbol {\theta }$ using numerical optimization techniques like the Newton-Raphson algorithm. To avoid erratic behaviour of the simulated objective function for different values of $ \boldsymbol {\theta }$, the same sequence of basic random numbers is used to generate the sequence $ \{\boldsymbol{\beta}^r\}$ during all the iterations of the optimizer (this is referred to as the technique of 'common random numbers').
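The sketch below (ours; names are hypothetical) computes $ SP_i$ for one individual under a normal mixing distribution. Passing the same matrix `Z` of standard normal draws at every call implements the common random numbers technique.

```python
import numpy as np

def sim_mixed_logit_prob(X, d, b, W_chol, Z):
    """Simulated probability (2.16). X: (T, J, k) regressors; d: (T,)
    choices in 0..J-1; b: (k,) mixing mean; W_chol: Cholesky factor of the
    mixing variance W; Z: (R, k) fixed standard normal draws."""
    betas = b + Z @ W_chol.T                    # R draws from N(b, W)
    V = np.einsum('tjk,rk->rtj', X, betas)      # utilities, (R, T, J)
    V -= V.max(axis=2, keepdims=True)           # guard against overflow
    P = np.exp(V)
    P /= P.sum(axis=2, keepdims=True)           # logit probabilities per period
    chosen = P[:, np.arange(len(d)), d]         # probability of observed choice
    return chosen.prod(axis=1).mean()           # product over t, average over r
```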

According to [27] the SML estimator is asymptotically equivalent to the ML estimator if $ T$ (the total number of observations) and $ R$ both tend to infinity and $ \sqrt{T}/R\rightarrow 0$. In practice, it is sufficient to fix $ R$ at a moderate value.

The approximation of an integral like (2.15) by the use of pseudo-random numbers may be questioned. [6] implements an alternative quasi-random SML method which uses quasi-random numbers. Like pseudo-random sequences, quasi-random sequences, such as Halton sequences, are deterministic, but they are more uniformly distributed in the domain of integration than pseudo-random ones. The numerical experiments indicate that the quasi-random method provides considerably better accuracy with far fewer draws and less computational time than the usual pseudo-random method.
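With SciPy's quasi-Monte Carlo module (available from SciPy 1.7 onwards, an assumption on the environment), Halton draws can replace the pseudo-random `Z` above by mapping quasi-random points in $ [0,1)^k$ through the inverse normal cdf:

```python
import numpy as np
from scipy.stats import norm, qmc

R, k = 250, 4                                  # number of draws and dimension (illustrative)
halton = qmc.Halton(d=k, scramble=True, seed=0)
Z_halton = norm.ppf(halton.random(R))          # (R, k) quasi-random normal draws
```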

2.2.3.2.2 Bayesian Inference

Let us suppose that the mixing distribution is Gaussian, that is, the vector  $ \boldsymbol {\beta }$ is normally distributed with mean  $ \boldsymbol{b}$ and variance matrix  $ \boldsymbol{W}$. The posterior density for $ I$ individuals can be written as

$\displaystyle \varphi (\boldsymbol{b}, \boldsymbol{W} \mid \boldsymbol{d}, \boldsymbol{X}) \propto Pr (\boldsymbol{d} \mid \boldsymbol{X}, \boldsymbol{b}, \boldsymbol{W}) \, \varphi (\boldsymbol{b}, \boldsymbol{W})\;,$ (2.17)

where $ Pr (\boldsymbol{d} \mid \boldsymbol{X}, \boldsymbol{b}, \boldsymbol{W}) = \prod_{i=1}^I P_i$ and $ \varphi (\boldsymbol{b}, \boldsymbol{W})$ is the prior density on $ \boldsymbol{b}$ and $ \boldsymbol{W}$. Sampling from (2.17) is difficult because $ P_i$ is an integral without a closed form, as discussed above. We would like to condition on $ \boldsymbol {\beta }$ such that the choice probabilities are easy to calculate. For this purpose we augment the model parameter vector with $ \boldsymbol {\beta }$. It is convenient to write $ \boldsymbol{\beta}_i$ instead of $ \boldsymbol {\beta }$, to interpret the random coefficients as representing heterogeneity among individuals. The $ \boldsymbol{\beta}_i$'s are independent and identically distributed with mixing distribution $ f(\cdot \mid \boldsymbol{b}, \boldsymbol{W})$. The posterior can then be written as

$\displaystyle \varphi (\boldsymbol{b}, \boldsymbol{W}, \boldsymbol{\beta}_I \mid \boldsymbol{d}, \boldsymbol{X}) \propto Pr (\boldsymbol{d} \mid \boldsymbol{X}, \boldsymbol{\beta}_I) \, \varphi (\boldsymbol{\beta}_I \mid \boldsymbol{b}, \boldsymbol{W}) \, \varphi (\boldsymbol{b}, \boldsymbol{W})\;,$ (2.18)

where $ \boldsymbol{\beta}_I$ collects the $ \boldsymbol{\beta}_i$'s for all the $ I$ individuals. Draws from this posterior density can be obtained by using the Gibbs sampler. Table 2.5 summarizes the three blocks of the sampler.


Table 2.5: Summary of conditional posteriors for the MMNL model
Parameter Conditional posterior or sampling method
$ \boldsymbol{b}$ Multivariate normal distribution
$ \boldsymbol{W}$ Inverted Wishart distribution
$ \boldsymbol{\beta}_I$ Metropolis-Hastings algorithm

For the first two blocks the conditional posterior densities are known and easy to sample from. The last block is more difficult. To sample from this density, a Metropolis-Hastings (MH) algorithm is set up. Note that only one MH iteration per Gibbs cycle is necessary, so that simulation within the Gibbs sampler is avoided. See [50], Chap. 12, for a detailed description of the MH algorithm for the mixed logit model and for guidelines on how to deal with other mixing densities. More general information on the MH algorithm can be found in Chap. II.3.
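A sketch of one such update (our illustration; names are hypothetical) for a single individual, using a random-walk proposal: the target combines the individual's logit likelihood with the $ N(\boldsymbol{b}, \boldsymbol{W})$ density implied by the mixing distribution.

```python
import numpy as np

def mh_step_beta_i(beta_i, loglik_i, b, W_inv, rho, rng):
    """One random-walk MH update of beta_i inside the Gibbs sampler.
    loglik_i(beta): log product over t of logit probabilities for individual i;
    rho scales the proposal step."""
    def log_target(beta):
        dev = beta - b
        return loglik_i(beta) - 0.5 * dev @ W_inv @ dev   # likelihood x mixing density
    prop = beta_i + rho * rng.standard_normal(beta_i.shape)
    log_acc = log_target(prop) - log_target(beta_i)
    return prop if np.log(rng.uniform()) < log_acc else beta_i
```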

Bayesian inference in the mixed logit model is called hierarchical Bayes because of the hierarchy of parameters. At the first level, there are the individual parameters $ \boldsymbol{\beta}_i$, which are distributed with mean $ \boldsymbol{b}$ and variance matrix $ \boldsymbol{W}$. The latter are called hyper-parameters, on which we also have prior densities. They form the second level of the hierarchy.

2.2.3.3 Application

We reproduce the results of [40] using their Gauss code available on the web site elsa.berkeley.edu/$ \sim$train/software.html. They analyse the demand for alternative-fuel vehicles. There are $ 4654$ respondents who choose among six alternatives (two of which run on electricity only). There are $ 21$ explanatory variables, of which $ 4$ are considered to have a random effect. The mixing distributions for these random coefficients are independent normal distributions. The model is estimated by SML with $ R=250$ replications per observation. Table 2.6 reports part of the estimation results of the MMNL model: the estimates and standard errors of the parameters of the normal mixing distributions. We do not report the estimates of the fixed-effect parameters corresponding to the $ 17$ other explanatory variables. For example, the luggage space error component induces greater covariance in the stochastic part of utility for pairs of vehicles with greater luggage space. We refer to [40] or [8] for more interpretations of the results.

[50] provides more information and pedagogical examples on the mixed multinomial model.


Table 2.6: SML estimates of MMNL random effect parameters
Variable Mean Standard deviation
Electric vehicle (EV) dummy $ -1.032$ ($ 0.503$) $ 2.466$ ($ 0.720$)
Compressed natural gas (CNG) dummy $ 0.626$ ($ 0.167$) $ 1.072$ ($ 0.411$)
Size $ 1.435$ ($ 0.499$) $ 7.457$ ($ 2.043$)
Luggage space $ 1.702$ ($ 0.586$) $ 5.998$ ($ 1.664$)

Robust standard errors in parentheses

