Let H be an infinite-dimensional real separable Hilbert space. Given an unknown mapping M : H → H that can only be observed with noise, we consider two modified Robbins-Monro procedures to estimate the zero point θ_0 ∈ H of M. These procedures work in appropriate finite-dimensional subspaces of growing dimension. Almost-sure convergence, a functional central limit theorem (hence asymptotic normality), a law of the iterated logarithm (hence an almost-sure loglog rate of convergence), and a mean rate of convergence are obtained for Hilbert space-valued mixingale, θ-dependent error processes.
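As context for the abstract above, the classic finite-dimensional Robbins-Monro iteration it generalizes can be sketched as follows; the target mapping M, the noise model, and the step-size choice here are illustrative assumptions, not the paper's Hilbert-space construction.

```python
# Classic Robbins-Monro stochastic approximation: find the zero of M
# from noisy observations, using step sizes a_n with sum a_n = infinity
# and sum a_n^2 < infinity (here a_n = 1/n).
import numpy as np

rng = np.random.default_rng(0)

def noisy_M(theta):
    # Illustrative mapping M(theta) = theta - 2, observed with noise;
    # its zero point is theta_0 = 2.
    return (theta - 2.0) + rng.normal(scale=0.1)

theta = 0.0
for n in range(1, 20001):
    a_n = 1.0 / n
    theta -= a_n * noisy_M(theta)  # step toward the zero of M

print(theta)  # close to the zero point theta_0 = 2
```

The paper's procedures additionally project each iterate onto a finite-dimensional subspace whose dimension grows with n, which this one-dimensional sketch does not attempt to show.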
Let f(y|x,z) (resp. f(y|x)) be the conditional density of Y given (X,Z) (resp. X). We construct a class of "smoothed" empirical likelihood-based tests for the conditional independence hypothesis: Pr[f(Y|X,Z)=f(Y|X)]=1. We show that the test statistics are asymptotically normal under the null hypothesis and derive their asymptotic distributions under a sequence of local alternatives. The tests are shown to possess a weak optimality property in large samples. Simulation results suggest that the tests behave well in finite samples. Applications to some economic and financial time series indicate that our tests reveal some interesting nonlinear causal relations which the traditional linear Granger causality test fails to detect.
In this paper we provide considerable Monte Carlo evidence on the finite sample performance of several alternative forms of White's [1982] IM test. Using linear regression and probit models, we extend the range of previous analysis in a manner that reveals new patterns in the behavior of the asymptotic version of the IM test - particularly with respect to curse of dimensionality effects. We also explore the potential of parametric and nonparametric bootstrap methods for reducing the size bias that characterizes the asymptotic IM test. The nonparametric bootstrap is of particular interest because of the weak conditions it imposes, but the results of our Monte Carlo experiments suggest that this technique is not without limitations. The parametric bootstrap demonstrates good size and power in reasonably small samples, but requires assumptions that may be auxiliary from the standpoint of a QMLE. We observe that violating one of these auxiliary assumptions has a non-trivial impact on the size of IM tests that employ this technique.
We provide a unified framework for analyzing bootstrapped extremum estimators of nonlinear dynamic models for heterogeneous dependent stochastic processes. We apply our results to the moving blocks bootstrap of Künsch (1989) and Liu and Singh (1992) and prove the first order asymptotic validity of the bootstrap approximation to the true distribution of quasi-maximum likelihood estimators. We also consider bootstrap testing. In particular, we prove the first order asymptotic validity of the bootstrap distribution of suitable bootstrap analogs of Wald and Lagrange Multiplier statistics for testing hypotheses.
The bootstrap is an increasingly popular method for performing statistical inference. This paper provides the theoretical foundation for using the bootstrap as a valid tool of inference for quasi-maximum likelihood estimators (QMLE). We provide a unified framework for analyzing bootstrapped extremum estimators of nonlinear dynamic models for heterogeneous dependent stochastic processes. We apply our results to two block bootstrap methods, the moving blocks bootstrap of Künsch (1989) and Liu and Singh (1992) and the stationary bootstrap of Politis and Romano (1994), and prove the first order asymptotic validity of the bootstrap approximation to the true distribution of QML estimators. Further, these block bootstrap methods are shown to provide heteroskedastic and autocorrelation consistent standard errors for the QMLE, thus extending the already large literature on robust inference and covariance matrix estimation. We also consider bootstrap testing. In particular, we prove the first order asymptotic validity of the bootstrap distribution of a suitable bootstrap analog of a Wald test statistic for testing hypotheses.
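The resampling step of the moving blocks bootstrap discussed above can be sketched as follows; the block length and series below are illustrative choices, and the sketch covers only the resampling, not the QMLE or test-statistic computations the paper validates.

```python
# Moving blocks bootstrap (Künsch 1989; Liu and Singh 1992): draw
# overlapping blocks of length l uniformly with replacement, then
# concatenate them and truncate to the original sample size n.
import numpy as np

def moving_blocks_resample(x, block_len, rng):
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    # Uniformly chosen starting indices of the overlapping blocks.
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [x[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]

rng = np.random.default_rng(1)
x = rng.normal(size=200)                      # placeholder data
x_star = moving_blocks_resample(x, block_len=10, rng=rng)
```

Keeping the blocks intact preserves the within-block dependence structure of the series, which is what makes the method suitable for the heterogeneous dependent processes the paper considers.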
This paper proposes a nonparametric test of conditional independence based on the notion that two conditional distributions are equal if and only if the corresponding conditional characteristic functions are equal. We use the functional delta method to expand the test statistic around the population truth and establish asymptotic normality under $\beta$-mixing conditions. We show that the test is consistent and has power against local alternatives at distance $n^{-1/2}h_{1}^{-(d_{1}+d_{3})/4}.$ The cases for which not all random variables of interest are continuously valued or observable are also treated, and we show that the test is nuisance-parameter free. Simulation results suggest that the test has better finite sample performance than the Hellinger metric test of Su and White (2002) in detecting nonlinear Granger causality in the mean. Applications to exchange rates and to stock prices and trading volumes indicate that our test can reveal some interesting nonlinear causal relations that the traditional linear Granger causality test fails to detect.
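The characterization the test builds on — distributions agree exactly when their characteristic functions do — can be illustrated with empirical characteristic functions of two unconditional samples; this is only a hypothetical sketch of the underlying idea, not the paper's conditional, kernel-smoothed statistic.

```python
# Compare empirical characteristic functions phi_hat(t) = mean(exp(i t X))
# of two samples over a grid of frequencies; the gap is small when the
# samples come from the same distribution.
import numpy as np

def ecf(x, t):
    """Empirical characteristic function of sample x at frequencies t."""
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

rng = np.random.default_rng(0)
t = np.linspace(-2.0, 2.0, 41)
x = rng.normal(size=500)
y = rng.normal(size=500)
gap = np.max(np.abs(ecf(x, t) - ecf(y, t)))  # small: same law
```

The paper's statistic instead contrasts conditional characteristic functions given (X, Z) and given X alone, which is what turns this equality into a test of conditional independence.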
We argue that the current framework for predictive ability testing (e.g., West, 1996) is not necessarily useful for real-time forecast selection, i.e., for assessing which of two competing forecasting methods will perform better in the future. We propose an alternative framework for out-of-sample comparison of predictive ability which delivers more practically relevant conclusions. Our approach is based on inference about conditional expectations of forecasts and forecast errors rather than the unconditional expectations that are the focus of the existing literature. We capture important determinants of forecast performance that are neglected in the existing literature by evaluating what we call the forecasting method (the model and the parameter estimation procedure), rather than just the forecasting model. Compared to previous approaches, our tests are valid under more general data assumptions (heterogeneity rather than stationarity) and estimation methods, and they can handle comparison of both nested and non-nested models, which is not currently possible. To illustrate the usefulness of the proposed tests, we compare the forecast performance of three leading parameter-reduction methods for macroeconomic forecasting using a large number of predictors: a sequential model selection approach, the "diffusion indexes" approach of Stock and Watson (2002), and the use of Bayesian shrinkage estimators.
The m-testing approach provides a general and convenient framework in which to view and construct specification tests for econometric models. Previous m-testing frameworks consider only test statistics that involve finite-dimensional parameter estimators and infinite-dimensional parameter estimators that affect the limit distribution of the m-test statistics. In this paper we propose a new m-testing framework using both finite- and infinite-dimensional parameter estimators, where the latter may or may not affect the limit distribution of the m-test. This greatly extends the potential and flexibility of m-testing. The new m-testing framework can be used to test hypotheses on parametric, semiparametric and nonparametric models. Some examples are given to illustrate how to use it to develop new specification tests.
We explore the extension of James-Stein type estimators in a direction that enables them to preserve their superiority when the sample size goes to infinity. Instead of shrinking a base estimator towards a fixed point, we shrink it towards a data-dependent point. We provide an analytic expression for the asymptotic risk and bias of James-Stein type estimators shrunk towards a data-dependent point and prove that they have smaller asymptotic risk than the base estimator. Shrinking an estimator toward a data-dependent point turns out to be equivalent to combining two random variables using the James-Stein rule. We propose a general combination scheme which includes random combination (the James-Stein combination) and the usual nonrandom combination as special cases. As an example, we apply our method to combine the Least Absolute Deviations estimator and the Least Squares estimator. Our simulation study indicates that the resulting combination estimators have desirable finite sample properties when errors are drawn from symmetric distributions. Finally, using stock return data we present some empirical evidence that the combination estimators have the potential to improve out-of-sample prediction in terms of both mean square error and mean absolute error.
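The equivalence the abstract describes — shrinking toward a data-dependent point is the same as combining two estimators by a James-Stein rule — can be sketched with a positive-part shrinkage formula; the shrinkage constant c and the two estimates below are placeholder assumptions, not the paper's data-driven choices.

```python
# Positive-part James-Stein combination: shrink a base estimate b1
# toward a data-dependent point b2, with weight determined by the
# distance between the two estimates.
import numpy as np

def james_stein_combine(b1, b2, c):
    """Return b2 + w * (b1 - b2), with a positive-part JS weight w."""
    d = b1 - b2
    w = max(0.0, 1.0 - c / float(d @ d))  # more shrinkage when b1, b2 agree
    return b2 + w * d

b_ls = np.array([1.2, 0.8, 2.1])    # e.g. a least squares estimate
b_lad = np.array([1.0, 1.0, 2.0])   # e.g. a least absolute deviations estimate
b_combined = james_stein_combine(b_ls, b_lad, c=0.05)
```

When the two estimates are close (or c is large), the weight hits zero and the combination collapses to the shrinkage target, mirroring how the James-Stein combination interpolates between the two base estimators.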