eScholarship
Open Access Publications from the University of California

Essays in Information Theoretic Econometrics and High Dimensional Econometrics

  • Author(s): Mao, Yi
  • Advisor(s): Ullah, Aman; Lee, Tae-Hwy; et al.
No data is associated with this publication.
Abstract

Since the introduction of econometrics as a subject, parametric functional forms for the relationship between independent and dependent variables have often been assumed to be known. Econometric functions built on misspecified parametric assumptions may yield inconsistent estimates and invalid implications. In this dissertation, I introduce an information theoretic (IT) approach that constructs econometric functions under minimal model assumptions about the variables. I follow the entropy approach initiated by Shannon (1948) to obtain a probability distribution of the variables by maximizing the entropy function. In Chapter 2, I use moment constraints to construct the IT-based maximum entropy density, regression and response functions, whose performance is investigated through simulation and empirical examples. I demonstrate that the maximum entropy econometric functions are purely data-driven, easy to implement, and advantageous over parametric and nonparametric kernel methods. Furthermore, in Chapter 3, I extend the IT-based maximum entropy estimators to a time series dataset by incorporating a theoretical constraint.
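The core construction described above — maximizing entropy subject to moment constraints — can be sketched numerically. A minimal illustration, not the dissertation's own implementation: the maximum entropy density with constraints on the first two moments has the exponential-family form p(x) ∝ exp(-λ'T(x)), and the multipliers λ solve the convex dual problem. All sample sizes, the grid, and the moment functions here are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(42)
x = rng.normal(size=500)                      # illustrative sample

def T(g):
    return np.vstack([g, g**2])               # moment functions T(x) = (x, x^2)

m = T(x).mean(axis=1)                         # sample moment constraints

grid = np.linspace(-6.0, 6.0, 2001)           # support for numerical integration
dx = grid[1] - grid[0]
Tg = T(grid)

def dual(lam):
    # convex dual of the maxent problem: log-partition log Z(lam) plus lam'm
    return logsumexp(-lam @ Tg) + np.log(dx) + lam @ m

lam = minimize(dual, np.zeros(2), method="BFGS").x
logp = -lam @ Tg
p = np.exp(logp - logsumexp(logp) - np.log(dx))  # normalized maxent density on the grid

implied = (Tg * p).sum(axis=1) * dx           # moments implied by the fitted density
```

At the dual optimum the gradient vanishes, which is exactly the statement that the implied moments of the fitted density match the sample moments — with two moment constraints the result is (a discretized) Gaussian.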

Given the structure of IT-based estimators, the number of moment conditions increases rapidly with the dimensionality of the variables, and the computational burden of maximum entropy estimation with all observed moments grows accordingly. Thus, in Chapter 4, I link IT-based maximum entropy estimation with moment selection. I propose a regularized ME approach to select the relevant moments, and I use the entropy ratio test to select moments whose Lagrange multipliers are significant. Simulation examples show that the probability of selecting the relevant moments approaches one as the sample size grows. Lastly, I explore the regularization of high dimensional matrices in Chapter 5, addressing the challenge of estimating large precision matrices, which arise in a variety of applications. I propose a dynamic conditional precision (DCP) algorithm that embeds a dynamic structure into conditional precision matrices. I show the consistency of the DCP estimator and apply it to a forecast combination application.
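The regularized ME idea can be sketched as a penalized version of the dual problem above: an L1-type penalty on the Lagrange multipliers shrinks the multipliers of irrelevant moments toward zero, so the surviving multipliers identify the relevant moments. This is only a schematic of the general technique, not the dissertation's estimator; the penalty level, the smoothing of the L1 term, the candidate moment set, and the standardization are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(7)
x = rng.normal(size=2000)                      # symmetric N(0,1) data: x, x^3, x^4 add little

grid = np.linspace(-6.0, 6.0, 2001)
dx = grid[1] - grid[0]

def T(g):
    return np.vstack([g, g**2, g**3, g**4])    # candidate moment functions

scale = T(grid).std(axis=1)                    # standardize so the penalty is comparable
Tg = T(grid) / scale[:, None]
m = (T(x) / scale[:, None]).mean(axis=1)

alpha, eps = 0.1, 1e-4                         # penalty level and smoothing (assumed values)

def penalized_dual(lam):
    log_z = logsumexp(-lam @ Tg) + np.log(dx)  # log-partition of exp(-lam'T)
    l1 = np.sqrt(lam**2 + eps).sum()           # smooth surrogate for ||lam||_1
    return log_z + lam @ m + alpha * l1

lam = minimize(penalized_dual, np.zeros(4), method="BFGS").x
selected = np.abs(lam) > 0.05                  # moments with non-negligible multipliers
```

For this symmetric sample, only the second-moment constraint carries information beyond the uniform base measure, so only its multiplier survives the penalty — mirroring the idea that the probability of selecting the relevant moments approaches one.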

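The DCP algorithm itself is the dissertation's contribution and is not reproduced here. As background for why precision matrices matter in forecast combination, a minimal static sketch using the classic variance-minimizing (Bates–Granger-type) weights, which are a linear function of the precision matrix of the forecast errors; all dimensions and the simulated covariance are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 400, 5
A = rng.normal(size=(k, k))
sigma_true = A @ A.T / k + np.eye(k)          # illustrative forecast-error covariance
e = rng.multivariate_normal(np.zeros(k), sigma_true, size=n)

S = np.cov(e.T)                               # sample error covariance
P = np.linalg.inv(S)                          # precision matrix
ones = np.ones(k)
w = P @ ones / (ones @ P @ ones)              # variance-minimizing combination weights

combined_var = 1.0 / (ones @ P @ ones)        # in-sample variance of the combined forecast
```

In-sample, the combined variance 1/(1'P1) is never larger than the variance of any single forecast, since each individual forecast is itself a feasible combination; the dynamic structure in the DCP approach replaces the static P with a time-varying conditional precision matrix.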

This item is under embargo until July 22, 2021.