
Wayne Yuan Gao Publications

Discussion Paper
Abstract

We propose a new formulation of the maximum score estimator that uses compositions of rectified linear unit (ReLU) functions, instead of indicator functions as in Manski (1975, 1985), to encode the sign alignment restrictions. Since the ReLU function is Lipschitz, our new ReLU-based maximum score criterion function is substantially easier to optimize using standard gradient-based optimization packages. We also show that our ReLU-based maximum score (RMS) estimator can be generalized to an umbrella framework defined by multi-index single-crossing (MISC) conditions, to which the original maximum score estimator cannot be applied. We establish the n^{-s/(2s+1)} convergence rate and asymptotic normality for the RMS estimator under order-s Hölder smoothness. In addition, we propose an alternative estimator using a further reformulation of RMS as a special layer in a deep neural network (DNN) architecture, which allows the estimation procedure to be implemented via state-of-the-art software and hardware for DNNs.
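The smoothing idea can be illustrated with a toy sketch. The ramp below — a composition of two ReLUs that approximates the indicator 1{t ≥ 0} — together with the bandwidth eps, the simulated design, and the function names are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def ramp(t, eps=0.1):
    # Composition of two ReLUs: 0 for t <= 0, t/eps for 0 < t < eps,
    # and 1 for t >= eps.  It approximates the indicator 1{t >= 0} as
    # eps -> 0 and is Lipschitz (constant 1/eps), so the resulting
    # criterion admits gradient-based optimization.
    return relu(t / eps) - relu(t / eps - 1.0)

def smoothed_max_score(beta, y, X, eps=0.1):
    # Smoothed analogue of Manski's maximum score criterion
    #   (1/n) * sum_i (2 y_i - 1) * 1{x_i' beta >= 0},
    # with the indicator replaced by the ReLU ramp above.
    return np.mean((2 * y - 1) * ramp(X @ beta, eps))

# Toy binary-choice data with true index direction beta0 = (1, -1)/sqrt(2)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
beta0 = np.array([1.0, -1.0]) / np.sqrt(2)
y = (X @ beta0 + rng.logistic(size=500) >= 0).astype(float)

score_true = smoothed_max_score(beta0, y, X)    # criterion at the true direction
score_wrong = smoothed_max_score(-beta0, y, X)  # criterion at the opposite direction
```

The smoothed criterion is larger at the true index direction than at its negation, mirroring the behavior of the original indicator-based criterion while remaining amenable to gradient methods.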

Discussion Paper
Abstract

This paper proposes a novel framework for the global optimization of a continuous function in a bounded rectangular domain. Specifically, we show that: (1) global optimization is equivalent to optimal strategy formation in a two-armed decision problem with known distributions, based on the Strategic Law of Large Numbers we establish; and (2) a sign-based strategy based on the solution of a parabolic PDE is asymptotically optimal. Motivated by this result, we propose a class of Strategic Monte Carlo Optimization (SMCO) algorithms, which uses a simple strategy that makes coordinate-wise two-armed decisions based on the signs of the partial gradient (or practically the first difference) of the objective function, without the need of solving PDEs. While this simple strategy is not generally optimal, it is sufficient for our SMCO algorithm to converge to a local optimizer from a single starting point, and to a global optimizer under a growing set of starting points. Numerical studies demonstrate the suitability of our SMCO algorithms for global optimization well beyond the theoretical guarantees established herein. For a wide range of test functions with challenging landscapes (multi-modal, non-differentiable and discontinuous), our SMCO algorithms perform robustly well, even in high-dimensional (d = 200 ∼ 1000) settings. In fact, our algorithms outperform many state-of-the-art global optimizers, as well as local algorithms augmented with the same set of starting points as ours.
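A much-simplified caricature of the coordinate-wise sign idea can be sketched as follows. The update rule, step sizes, and decay schedule are illustrative assumptions: the actual SMCO algorithms form two-armed Monte Carlo decisions, whereas this sketch only shows the derivative-free, sign-based flavor of the strategy:

```python
import numpy as np

def sign_step_optimize(f, x0, lo, hi, h=1e-3, step=0.1, n_iter=200, decay=0.97):
    # For each coordinate, evaluate a first difference of f and move a
    # fixed step in the direction of increase (here we maximize f),
    # shrinking the step over iterations and clipping to the box [lo, hi].
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        for j in range(len(x)):
            e = np.zeros_like(x)
            e[j] = h
            direction = np.sign(f(x + e) - f(x - e))  # sign of first difference
            x[j] = np.clip(x[j] + step * direction, lo[j], hi[j])
        step *= decay
    return x

# Maximize f(x) = -||x - c||^2 on [-1, 1]^5; the unique maximizer is x = c.
c = np.array([0.3, -0.5, 0.0, 0.7, -0.2])
f = lambda x: -np.sum((x - c) ** 2)
x_hat = sign_step_optimize(f, np.zeros(5), -np.ones(5), np.ones(5))
```

Even this stripped-down version uses only signs of first differences, so it needs no gradients and tolerates non-smooth objectives; the paper's contribution is to show when and how such sign-based strategies attain local and global optimality.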


Discussion Paper
Abstract

This paper studies the semiparametric estimation and inference of integral functionals on submanifolds, which arise naturally in a variety of econometric settings. For linear integral functionals on a regular submanifold, we show that the semiparametric plug-in estimator attains the minimax-optimal convergence rate n^{-s/(2s+d-m)}, where s is the Hölder smoothness order of the underlying nonparametric function, d is the dimension of the first-stage nonparametric estimation, and m is the dimension of the submanifold over which the integral is taken. This rate coincides with the standard minimax-optimal rate for a (d − m)-dimensional nonparametric estimation problem, illustrating that integration over the m-dimensional submanifold effectively reduces the problem’s dimensionality. We then provide a general asymptotic normality theorem for linear/nonlinear submanifold integrals, along with a consistent variance estimator. We provide simulation evidence in support of our theoretical results.
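A minimal numerical illustration of a plug-in estimator for a linear submanifold integral, with d = 2 and m = 1: estimate the regression function nonparametrically, then integrate the estimate along a line segment. The true regression function, the Nadaraya–Watson first stage, and the bandwidth are illustrative choices, not the paper's setup:

```python
import numpy as np

# Target: theta = \int_0^1 g(t, 0.5) dt, the integral of g along the
# one-dimensional submanifold {x2 = 0.5} of the unit square (d = 2, m = 1).
rng = np.random.default_rng(1)
n = 4000
X = rng.uniform(size=(n, 2))
g = lambda x: np.sin(np.pi * x[:, 0]) * x[:, 1]   # illustrative true function
Y = g(X) + 0.1 * rng.normal(size=n)               # noisy observations

def g_hat(x0, bw=0.08):
    # First stage: Nadaraya-Watson kernel regression estimate of g at x0.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * bw ** 2))
    return np.sum(w * Y) / np.sum(w)

# Second stage: integrate the plug-in estimate along the submanifold
# with a simple Riemann sum over a grid on the segment.
grid = np.linspace(0.0, 1.0, 101)
theta_hat = np.mean([g_hat(np.array([t, 0.5])) for t in grid])

theta_true = 1 / np.pi   # \int_0^1 sin(pi t) * 0.5 dt = 1/pi
```

The plug-in estimate averages the first-stage error along the segment, which is the mechanism behind the dimension reduction in the rate: the integral behaves like a (d − m)-dimensional estimation problem rather than a d-dimensional one.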

Abstract

This paper develops exact finite sample and asymptotic distributions for structural equation tests based on partially restricted reduced form estimates. Particular attention is given to models with large numbers of instruments, wherein the use of partially restricted reduced form estimates is shown to be especially advantageous in statistical testing even in cases of uniformly weak instruments and reduced forms. Comparisons are made with methods based on unrestricted reduced forms, and numerical computations showing finite sample performance of the tests are reported. Some new results are obtained on inequalities between noncentral chi-squared distributions with different degrees of freedom that assist in analytic power comparisons.
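The kind of power comparison that such noncentral chi-squared inequalities bear on can be illustrated by simulation: at a fixed noncentrality parameter and significance level, a chi-squared test with fewer degrees of freedom has higher power. The parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, lam, n_sim = 0.05, 5.0, 200_000

def power(df):
    # Monte Carlo power of a chi-squared test with df degrees of freedom:
    # central chi-squared under the null, noncentral chi-squared with
    # noncentrality lam under the alternative.
    null = rng.chisquare(df, n_sim)
    alt = rng.noncentral_chisquare(df, lam, n_sim)
    crit = np.quantile(null, 1 - alpha)   # level-alpha critical value
    return np.mean(alt > crit)            # simulated rejection probability

power_small = power(2)    # few degrees of freedom (e.g. few restrictions)
power_large = power(20)   # many degrees of freedom (e.g. many instruments)
```

Holding the noncentrality fixed, spreading it over more degrees of freedom dilutes the signal, which is why restricting the reduced form (and thereby reducing degrees of freedom) can be advantageous in testing.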