Vassilis A. Hajivassiliou Publications

Abstract

In this paper we propose a modelling approach for labor supply and consumption decisions that is firmly grounded within a utility-maximizing framework and allows a role for institutional constraints such as limited access to borrowing and involuntary unemployment. We report estimates of a system of dynamic probit models, fitted to data from the Panel Study of Income Dynamics, that test broad predictions of the theoretical model.

One of our models describes a household’s propensity to be liquidity constrained in a given period. The second is a dynamic ordered probit model for a labor constraint indicator describing qualitative aspects of the conditions of employment, that is, whether the household head is involuntarily overemployed, voluntarily employed, or involuntarily underemployed or unemployed. These models are estimated separately as well as jointly. Our results provide strong support for the basic theory of constrained behavior and the interaction between liquidity constraints and exogenous constraints on labor supply.

Keywords: Intertemporal optimization, Quantity constraints, Liquidity constraints, Unemployment, Dynamic probit models, Simulation estimation

JEL Classification: D91, E24, C61, C33, C35

Abstract

In this paper, I first show how aggregation over submarkets that exhibit varying degrees of disequilibrium can provide a foundation for the classic “short-side” disequilibrium econometric model of Fair and Jaffee [11]. I then introduce explicit randomness in the aggregative model as arising from economy-wide demand and supply shocks, which are allowed to be serially correlated. I develop suitable simulation estimation methods to circumvent hitherto intractable computational problems resulting from serial correlation in the unobservables in disequilibrium analysis. I show that the introduction of macroeconomic shocks has fundamentally different implications from the traditional approach, which arbitrarily appends an additive disturbance term to the basic equation of the model.

The aggregative disequilibrium model with macroeconomic shocks is estimated from a set of quarterly observations on the labor market in US manufacturing. A major finding is that the introduction of macroeconomic shocks is able to explain a large part of the residual serial correlation that was plaguing traditional studies. Moreover, the new modelling technique yields considerably more satisfactory estimates of the supply side of the markets.
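
The short-side logic underlying the aggregative model can be illustrated with a toy calculation that is not taken from the paper: if each submarket transacts the minimum of its demand and supply, Q_i = min(D_i, S_i), then aggregate transactions lie weakly below both aggregate demand and aggregate supply, and the shortfall depends on how disequilibrium is distributed across submarkets. A minimal sketch, with hypothetical numbers throughout:

# Toy illustration (not from the paper): submarkets transact on their
# short side, so aggregate transactions fall below the aggregate short
# side by an amount that depends on the mix of constrained submarkets.
import numpy as np

rng = np.random.default_rng(0)
K = 1000                                    # number of submarkets (hypothetical)
D = np.exp(rng.normal(0.0, 0.3, size=K))    # submarket demands
S = np.exp(rng.normal(0.1, 0.3, size=K))    # submarket supplies
Q = np.minimum(D, S).sum()                  # aggregate transactions
print(Q, min(D.sum(), S.sum()))             # Q <= min(aggregate D, aggregate S)
print((D < S).mean())                       # share of demand-constrained submarkets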

Keywords: Disequilibrium, Aggregation, Simulation estimation methods, Dynamic limited dependent variable models, Labor markets

JEL Classification: 210, 820

Journal of Applied Econometrics
Abstract

In this paper I develop models of the incidence and extent of external financing crises of developing countries, which lead to multiperiod multinomial discrete choice and discrete/continuous econometric specifications with flexible correlation structures in the unobservables. I show that estimation of these models based on simulation methods has attractive statistical properties and is computationally tractable. Three such simulation estimation methods are exposited, analyzed theoretically, and used in practice: a method of smoothly simulated maximum likelihood (SSML) based on a smooth recursive conditioning simulator (SRC), a method of simulated scores (MSS) based on a Gibbs sampling simulator (GSS), and an MSS estimator based on the SRC simulator.

The data set used in this study comprises 93 developing countries observed over the 1970–1988 period and contains information on external financing responses that was not available to investigators in the past. Moreover, previous studies of external debt problems had to rely on restrictive correlation structures in the unobservables to overcome otherwise intractable computational difficulties. The findings show that allowing, for the first time, flexible correlation patterns in the unobservables through estimation by simulation has a substantial impact on the parameter estimates obtained from such models. This suggests that past empirical results in this literature require a substantial reevaluation.

Keywords: Simulation estimation, Maximum simulated likelihood, Simulated scores, Gibbs sampling, External debt crises

Abstract

An extensive literature in econometrics and in numerical analysis has considered the computationally difficult problem of evaluating the multiple integral representing the probability of a multivariate normal random vector constrained to lie in a rectangular region. A leading case of such an integral is the negative orthant probability, implied by the multinomial probit (MNP) model used in econometrics and biometrics. Classical parametric estimation of this model requires, for each trial parameter vector and each observation in a sample, evaluation of a normal orthant probability and its derivatives with respect to the mean vector and the variance-covariance matrix. Several Monte Carlo simulators have been developed to approximate the orthant probability integral and its linear and logarithmic derivatives; these simulators limit computation while possessing properties that facilitate their use in iterative calculations for statistical inference. In this paper, I discuss GAUSS and FORTRAN implementations of 13 simulation algorithms, and I present results on the impact of vectorization on the relative computational performance of the simulation algorithms. I show that the 13 simulators differ greatly with respect to the degree of vectorizability: in some cases activating the CRAY-Y/MP4 vector facility achieves a speed-up factor in excess of 10, while in others the gains in speed are negligible. Evaluating the algorithms in terms of lowest simulation root-mean-squared error for given computation time, I find that (1) GHK, an importance-sampling recursive triangularization simulator, remains the best method for simulating probabilities irrespective of vectorization; (2) the crude Monte Carlo simulator CFS offers the greatest benefits from vectorization; and (3) the GSS algorithm, based on “Gibbs resampling,” emerges as one of the preferred methods for simulating logarithmic derivatives, especially in the absence of vectorization.
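
As a point of reference for the simplest of the 13 algorithms, the following is a minimal sketch of the crude frequency simulator (CFS); the function and example values are illustrative and not taken from the paper's GAUSS or FORTRAN code. CFS estimates a rectangle probability by the share of raw multivariate normal draws that fall inside the rectangle, a single fully vectorizable pass over the draws, which is consistent with its large gains from vectorization; its drawback, addressed by the smooth simulators such as GHK, is that the estimate is a step function of the parameters.

# Minimal sketch of the crude frequency simulator (CFS); illustrative only.
import numpy as np

def cfs(mu, Omega, a, b, R=10000, seed=0):
    """Estimate P(a < V < b) for V ~ N(mu, Omega) as the fraction of
    R multivariate normal draws that land inside the rectangle."""
    rng = np.random.default_rng(seed)
    V = rng.multivariate_normal(mu, Omega, size=R)
    inside = np.all((V > a) & (V < b), axis=1)
    return inside.mean()

# Example: a trivariate negative orthant probability (an MNP-type cell probability).
mu = np.zeros(3)
Omega = np.array([[1.0, 0.5, 0.5],
                  [0.5, 1.0, 0.5],
                  [0.5, 0.5, 1.0]])
print(cfs(mu, Omega, a=np.full(3, -np.inf), b=np.zeros(3)))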

Abstract

This paper discusses estimation methods for limited dependent variable (LDV) models that employ Monte Carlo simulation techniques to overcome computational problems in such models. These difficulties take the form of high dimensional integrals that need to be calculated repeatedly but cannot be easily approximated by series expansions. In the past, investigators were forced to restrict attention to special classes of LDV models that are computationally manageable. The simulation estimation methods we discuss here make it possible to estimate LDV models that are computationally intractable using classical estimation methods.

We first review the ways in which LDV models arise, describing the differences and similarities in censored and truncated data generating processes; it is censoring and truncation that give rise to the troublesome multivariate integrals. We then describe various simulation methods for evaluating such integrals; naturally, censoring and truncation play roles in simulation as well. Finally, we describe estimation methods that rely on simulation. We review three general approaches that combine estimation of LDV models and simulation: simulation of the log-likelihood function (MSL), simulation of moment functions (MSM), and simulation of the score (MSS). The MSS is a combination of ideas from MSL and MSM, treating the efficient score of the log-likelihood function as a moment function.

We use the rank ordered probit model as an illustrative example to investigate the comparative properties of these simulation estimation approaches.
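
To fix ideas, here is a deliberately simple sketch, not taken from the paper, of how MSL and MSM use a common probability simulator. The model is a univariate binary probit, where the choice probability is available in closed form and simulation is not actually needed, but the mechanics carry over to the higher-dimensional LDV models (such as the rank-ordered probit) where it is. All names and tuning values below are illustrative assumptions.

# Illustrative sketch: MSL and MSM built on a logit-smoothed frequency
# simulator of the choice probability, with common random draws held fixed.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n, R, lam = 1000, 100, 0.2                  # sample size, draws per observation, smoothing
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 1.0])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)
eps = rng.normal(size=(n, R))               # draws held fixed across parameter values

def p_sim(beta):
    # Smoothed frequency simulator of Pr(y_i = 1): average over draws of a
    # smoothed version of the indicator 1{x_i'beta + eps_ir > 0}.
    z = (X @ beta)[:, None] + eps
    return expit(z / lam).mean(axis=1).clip(1e-6, 1 - 1e-6)

def msl_objective(beta):
    # MSL: plug simulated probabilities into the log-likelihood.
    p = p_sim(beta)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def msm_objective(beta):
    # MSM: make the simulated moment E[x_i (y_i - Pr(y_i = 1))] close to zero.
    g = X.T @ (y - p_sim(beta)) / n
    return g @ g

b_msl = minimize(msl_objective, np.zeros(2), method="Nelder-Mead").x
b_msm = minimize(msm_objective, np.zeros(2), method="Nelder-Mead").x
print(b_msl, b_msm)     # both should land in the neighbourhood of beta_true
# MSS would instead simulate the score d log L / d beta and solve for its zero;
# with a smooth simulator this coincides with the first-order condition of MSL.

Because the logarithm is nonlinear, MSL with a fixed number of draws carries a simulation bias, whereas the MSM moment is linear in the simulated probability; the smoothing parameter lam introduces an additional, controllable bias.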

Abstract

An extensive literature in econometrics and in numerical analysis has considered the problem of evaluating the multiple integral P(B; µ, Ω) = ∫_a^b n(v − µ, Ω) dv = E[1(V ∈ B)], where V is an m-dimensional normal vector with mean µ, covariance matrix Ω, and density n(v − µ, Ω), and 1(V ∈ B) is an indicator for the event B = {V | a < V < b}. A leading case of such an integral is the negative orthant probability, where B = {v | v < 0}. The problem is computationally difficult except in very special cases. The multinomial probit (MNP) model used in econometrics and biometrics has cell probabilities that are negative orthant probabilities, with µ and Ω depending on unknown parameters (and, in general, on covariates). Estimation of this model requires, for each trial parameter vector and each observation in a sample, evaluation of P(B; µ, Ω) and of its derivatives with respect to µ and Ω. This paper surveys Monte Carlo techniques that have been developed for approximating P(B; µ, Ω) and its linear and logarithmic derivatives while limiting computation and possessing properties that facilitate their use in iterative calculations for statistical inference: the Crude Frequency Simulator (CFS), Normal Importance Sampling (NIS), a Kernel-Smoothed Frequency Simulator (KFS), Stern’s Decomposition Simulator (SDS), the Geweke–Hajivassiliou–Keane Simulator (GHK), a Parabolic Cylinder Function Simulator (PCF), Deák’s Chi-squared Simulator (DCS), an Acceptance/Rejection Simulator (ARS), the Gibbs Sampler Simulator (GSS), a Sequentially Unbiased Simulator (SUS), and an Approximately Unbiased Simulator (AUS). We also discuss GAUSS and FORTRAN implementations of these algorithms and present our computational experience with them. We find that GHK is overall the most reliable method.
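
For concreteness, a minimal sketch of the GHK recursive simulator for P(a < V < b) follows; it is my own illustration, without the derivative calculations, antithetics, or other refinements used in the implementations the paper discusses.

# Minimal GHK sketch: recursive conditioning on a Cholesky triangularization,
# drawing each component from its conditional truncated normal. Illustrative only.
import numpy as np
from scipy.stats import norm, multivariate_normal

def ghk(mu, Omega, a, b, R=1000, seed=0):
    """Simulate P(a < V < b) for V ~ N(mu, Omega)."""
    rng = np.random.default_rng(seed)
    m = len(mu)
    L = np.linalg.cholesky(Omega)            # V = mu + L eta, eta ~ N(0, I)
    u = rng.uniform(size=(R, m))
    eta = np.zeros((R, m))
    prob = np.ones(R)
    for j in range(m):
        cond = eta[:, :j] @ L[j, :j]         # contribution of earlier components
        lo = norm.cdf((a[j] - mu[j] - cond) / L[j, j])
        hi = norm.cdf((b[j] - mu[j] - cond) / L[j, j])
        prob *= hi - lo                      # conditional probability of staying in the band
        eta[:, j] = norm.ppf(lo + u[:, j] * (hi - lo))   # truncated-normal draw
    return prob.mean()

# Example: a trivariate negative orthant probability, checked against scipy's
# numerical multivariate normal CDF.
mu = np.zeros(3)
Omega = np.array([[1.0, 0.5, 0.5],
                  [0.5, 1.0, 0.5],
                  [0.5, 0.5, 1.0]])
a, b = np.full(3, -np.inf), np.zeros(3)
print(ghk(mu, Omega, a, b, R=5000))
print(multivariate_normal.cdf(b, mean=mu, cov=Omega))

Because the simulated probability is a smooth function of (mu, Omega) once the uniform draws are held fixed, it lends itself to gradient-based likelihood maximization.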

Abstract

This paper considers a dual approach to the problem of maximizing lifetime utility subject to liquidity constraints in a discrete time setting. These constraints prohibit the decision maker from borrowing against future endowment income. The dual approach allows us to exploit directly the supermartingale property of the marginal utility of expenditure and to establish existence and uniqueness of the optimal solution. The optimal solution is interpreted as deriving from a version of the problem that is subject to a single lifetime budget constraint, where expenditures and incomes are discounted to the beginning of the horizon by means of individualized Arrow-Debreu prices.
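
The supermartingale property referred to here is, in standard notation and under the usual assumptions (concave period utility u, a nonnegative multiplier λ_t on the no-borrowing constraint, and, for simplicity, β(1 + r) = 1), the familiar Euler inequality u′(c_t) = E_t[u′(c_{t+1})] + λ_t ≥ E_t[u′(c_{t+1})], λ_t ≥ 0, so the marginal utility of expenditure is a supermartingale, with equality in periods when the liquidity constraint does not bind. This is a standard statement of the property, not necessarily the paper's exact formulation.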

JEL Classification: D91, D81

Keywords: Consumer choice, duality theory

Abstract

This chapter discusses simulation estimation methods that overcome the computational intractability of classical estimation of limited dependent variable models with flexible correlation structures in the unobservable stochastic terms. These difficulties arise because of the need to evaluate accurately very high dimensional integrals. The methods based on simulation do not require the exact evaluation of these integrals and hence are feasible using computers of even moderate power. I first discuss a series of ideas that had been used in efforts to circumvent these computational problems by employing standard numerical analysis approximation methods. I then show how simulation techniques solve the computational problems without the need to resort to such generally unsatisfactory numerical approximations. All currently known simulation algorithms are then compared in terms of theoretical properties and practical performance.

JEL Classification: C13, C15, C51

Keywords: Simulation methods, econometric modeling, model estimation, correlation structures

Abstract

The method of simulated scores (MSS) is presented for estimating LDV models with flexible correlation structures in the unobservables. We propose simulators that are continuous in the unknown parameter vectors, and hence standard optimization methods can be used to compute the MSS estimators that employ these simulators. We establish consistency and asymptotic normality of the MSS estimators and derive suitable rates at which the number of simulations must grow if biased simulators are used. The estimation method is applied to analyze a model in which the incidence and the extent of debt repayment problems of LDCs are viewed as optimized choices of the central authorities of the countries in a framework of credit rationing. The econometric implementation of the resulting multi-period probit and Tobit models avoids the need for high dimensional integration. Our findings show that the restrictive error structures imposed by past studies may have led to unreliable econometric results.

Keywords: Simulation model, asymptotic theory, censored model

JEL Classification: C24, C15, C13

Journal of Econometrics
Abstract

We apply a new simulation method that solves the multidimensional probability integrals that arise in maximum likelihood estimation of a broad class of limited dependent variable models. The simulation method has four key features: the simulated choice probabilities are unbiased; they are a continuous and differentiable function of the parameters of the model; they are bounded between 0 and 1; and their computation takes an effort that is nearly linear in the dimension of the probability integral, independent of the magnitudes of the true probabilities. We also show that the new simulation method produces probability estimates with substantially smaller variance than those generated by acceptance-rejection methods or by Stern’s (1987) method. The simulated probabilities can therefore be used to revive the Lerman and Manski (1981) procedure of approximating the likelihood function using simulated choice probabilities by overcoming its computational disadvantages.

Abstract

This paper analyzes price fixing by the Joint Executive Committee railroad cartel from 1880 to 1886 and develops tests of two game-theoretic models of tacit collusion. The first model, due to Abreu, Pearce and Stacchetti (1986), predicts that price will switch across regimes according to a Markov process. The second, by Rotemberg and Saloner (1986), postulates that price wars are more likely in periods of high industry demand. Switching regressions are used to model the firms’ shifting between collusive and punishment behavior. The main econometric novelty in the estimation procedures introduced in this paper is that misclassification probabilities are allowed to vary endogenously over time. The JEC data set is expanded to include measures of grain production to be shipped and availability of substitute transportation services. Our findings cast doubt on the applicability of the Rotemberg and Saloner model to the JEC railroad cartel, while they confirm the Markovian prediction of the Abreu et al. model.
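
A minimal sketch of the switching-regression idea follows, with entirely hypothetical numbers and a constant regime probability; the paper's innovation is precisely to let the classification probabilities vary endogenously over time.

# Two-regime switching (mixture) regression estimated by maximum likelihood.
# Illustrative only: constant regime probability pi, common slope and variance.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
collusive = rng.uniform(size=n) < 0.7        # regime indicator, unobserved by the econometrician
y = np.where(collusive, 1.0 + 0.5 * x, -0.5 + 0.5 * x) + 0.3 * rng.normal(size=n)

def neg_loglik(theta):
    a1, a2, b, log_s, logit_pi = theta
    s, pi = np.exp(log_s), 1.0 / (1.0 + np.exp(-logit_pi))
    f1 = norm.pdf(y - a1 - b * x, scale=s)   # density if the period is collusive
    f2 = norm.pdf(y - a2 - b * x, scale=s)   # density if the period is a price war
    return -np.sum(np.log(pi * f1 + (1 - pi) * f2))

start = np.array([1.0, -1.0, 0.0, 0.0, 0.0])
res = minimize(neg_loglik, start, method="Nelder-Mead", options={"maxiter": 5000})
print(res.x)   # intercepts near (1.0, -0.5), slope near 0.5; last entry is logit(pi)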

Keywords: Tacit collusion, cartels, price competition, railroads, transportation

JEL Classification: 026, 022, 611, 615

Abstract

This paper employs panel-data econometric techniques to explore the relations between measures of creditworthiness and debt discounts on the secondary markets. It investigates empirically whether the secondary market discounts reflect a history of past repayment problems or whether they anticipate future debt crises. The answer to this question has implications for the desirability of debt relief. The main finding is that the secondary markets do not seem to absorb economic information rapidly, which suggests that they are still at an evolutionary stage and are not very efficient. The estimated models are also used to analyze issues in the international finance literature.

JEL Classification: 431, 443, 411

Keywords: Debt crisis, debt relief, international finance

Abstract

A model is presented in which aggregation over microsectors, each in a different extent of disequilibrium, has implications analogous to the standard single aggregate sector switching disequilibrium model. Empirical implementation of the model of this paper is less involved than estimation of the standard model. Hence the approach here may be seen both as providing an underlying micro justification for the switching disequilibrium model, and as a computationally simpler (though statistically less efficient) technique. The model is estimated from post-war quarterly labour market data for the U.S. manufacturing sector. We find the supply side more satisfactorily determined than in past disequilibrium studies.

Abstract

This paper analyzes the consistency properties of classical estimators for limited dependent variable (LDV) models under conditions of serial correlation in the unobservables. A unified method of proof is used to show that for certain cases (e.g., the Probit, Tobit and Normal Switching Regimes models, which are normality-based) estimators that neglect particular types of serial dependence (specifically, those corresponding to the class of “mixing” processes) remain consistent. The same line of proof fails for the analogues of the above models that impose logistic distributional assumptions, indicating that normality plays a special role in these problems. Sets of Monte Carlo experiments are then carried out to investigate these theoretical results.
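
The flavour of the Monte Carlo evidence can be conveyed with a small sketch, not the paper's design: generate a probit with AR(1) errors whose marginal variance is normalized to one, estimate an ordinary probit that ignores the serial dependence, and check that the estimates still center on the true coefficients.

# Illustrative Monte Carlo sketch (hypothetical design): probit MLE that
# neglects AR(1) serial correlation in the errors remains centered on the truth.
import numpy as np
import statsmodels.api as sm

def simulate_probit_ar1(n, beta, rho, rng):
    x = rng.normal(size=n)
    e = np.zeros(n)
    e[0] = rng.normal()                               # stationary start, Var(e_t) = 1
    for t in range(1, n):
        e[t] = rho * e[t - 1] + np.sqrt(1 - rho**2) * rng.normal()
    y = (beta[0] + beta[1] * x + e > 0).astype(int)
    return y, sm.add_constant(x)

rng = np.random.default_rng(0)
estimates = []
for _ in range(200):
    y, X = simulate_probit_ar1(n=2000, beta=(0.5, 1.0), rho=0.5, rng=rng)
    estimates.append(sm.Probit(y, X).fit(disp=0).params)
print(np.mean(estimates, axis=0))                     # should be close to (0.5, 1.0)

The usual probit standard errors would still be invalid under serial dependence; the sketch only illustrates the consistency claim.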

JEL Classification: 211

Keywords: Consistency, Serial dependence, Mixing processes, Limited dependent variable models, Probit, Logit, Tobit, Normality