Discussion Paper

A Zero-One Result for the Least Squares Estimator

The least squares estimator for the linear regression model is shown to converge to the true parameter vector either with probability one or with probability zero, under weak conditions on the dependent variable and the regressors. The dependent variable and regressors are assumed to be weakly dependent, in particular strong mixing, and no additional conditions are placed on the errors. The regressors may be fixed or random but must exhibit a certain degree of independent variability. No further assumptions are needed.
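For concreteness, the result described above can be written in standard notation as a short sketch; the symbols below are assumed for illustration and are not taken from the paper itself.

  % Linear regression model with true parameter \beta_0 and its OLS estimator
  y_t = x_t' \beta_0 + \varepsilon_t, \qquad
  \hat{\beta}_n = \Big( \sum_{t=1}^{n} x_t x_t' \Big)^{-1} \sum_{t=1}^{n} x_t y_t .
  % Zero-one dichotomy: convergence of the estimator to the true parameter
  % is an event of probability zero or one, with no extra conditions on \varepsilon_t.
  \Pr\big( \hat{\beta}_n \to \beta_0 \ \text{as } n \to \infty \big) \in \{ 0, 1 \}.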