Maximum likelihood ratio theory has contributed tremendous success to parametric inference, owing to the fundamental theory of Wilks (1938). Yet there is no generally applicable approach to nonparametric inference based on function estimation. Maximum likelihood ratio test statistics may not exist at all in the nonparametric function estimation setting; even when they exist, they are hard to find and cannot be optimal, as shown in this paper. We introduce the sieve likelihood statistics to overcome these drawbacks of nonparametric maximum likelihood ratio statistics. A new Wilks phenomenon is unveiled. We demonstrate that the sieve likelihood statistics are asymptotically distribution-free and follow χ²-distributions under the null hypothesis for a number of useful hypotheses and a variety of useful models, including Gaussian white noise models, nonparametric regression models, varying-coefficient models and generalized varying-coefficient models. We further demonstrate that sieve likelihood ratio statistics are asymptotically optimal in the sense that they achieve the optimal rates of convergence given by Ingster (1993). They can even be adaptively optimal in the sense of Spokoiny (1996) with a simple choice of adaptive smoothing parameter. Our work indicates that sieve likelihood ratio statistics are indeed general and powerful for nonparametric inference based on function estimation.
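The Wilks phenomenon the abstract extends can be seen in a minimal parametric simulation. The toy setup below (a Gaussian-mean test of my choosing, not the paper's sieve construction) shows that under the null, twice the log-likelihood ratio behaves like a χ² variable:

```python
import numpy as np

# Toy illustration of the classical Wilks phenomenon (parametric case only;
# the sieve/nonparametric extension is the subject of the abstract above).
# Model: X_1..X_n ~ N(theta, 1); test H0: theta = 0.
# Twice the log-likelihood ratio equals n * xbar^2, which is chi^2_1 under H0.
rng = np.random.default_rng(0)
n, reps = 50, 4000
xbar = rng.standard_normal((reps, n)).mean(axis=1)  # data generated under H0
lrt = n * xbar ** 2                                  # 2 * log likelihood ratio
print(f"mean of LRT statistic: {lrt.mean():.2f} (chi^2_1 has mean 1)")
```

The distribution-free character is what fails in general nonparametric settings and what the sieve statistics restore.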


## Scholarly Works (18 results)

Varying-coefficient models are a useful extension of classical linear models. Their appeal is that the coefficient functions can easily be estimated via simple local regression, yielding a simple one-step estimation procedure. We show that such a one-step method cannot be optimal when different coefficient functions admit different degrees of smoothness. This drawback can be repaired by our proposed two-step estimation procedure. The asymptotic mean-squared error of the two-step procedure is obtained and shown to achieve the optimal rate of convergence. Simulation studies show that the gain from the two-step procedure can be quite substantial. The methodology is illustrated by an application to an environmental dataset.
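A minimal numerical sketch of the two-step idea, under toy assumptions of my own (model, kernel, and bandwidths below are illustrative, not the paper's): first obtain rough pilot estimates of a coefficient function with an undersmoothing bandwidth, then re-smooth those pilot estimates with a bandwidth matched to that coefficient's smoothness.

```python
import numpy as np

# Hypothetical toy varying-coefficient model: y = a(u) * x + noise.
# Step 1: pilot local least-squares estimates of a(u) with a small bandwidth.
# Step 2: re-smooth the pilot estimates with a larger bandwidth suited to a.

def kernel_weights(u0, u, h):
    return np.exp(-0.5 * ((u - u0) / h) ** 2)

rng = np.random.default_rng(1)
m = 500
u = np.sort(rng.uniform(0, 1, m))
x = rng.standard_normal(m)
a = np.sin(2 * np.pi * u)                      # true coefficient function
y = a * x + rng.standard_normal(m)

def pilot(u0, h=0.02):
    w = kernel_weights(u0, u, h)
    return np.sum(w * x * y) / np.sum(w * x ** 2)   # local least squares

a_pilot = np.array([pilot(t) for t in u])           # step 1: undersmoothed

def resmooth(u0, h=0.05):
    w = kernel_weights(u0, u, h)
    return np.sum(w * a_pilot) / np.sum(w)          # step 2: smooth pilot fit

a_two_step = np.array([resmooth(t) for t in u])
rmse_pilot = np.sqrt(np.mean((a_pilot - a) ** 2))
rmse_two = np.sqrt(np.mean((a_two_step - a) ** 2))
```

In this toy run the re-smoothed estimate has lower root mean-squared error than the noisy pilot fit, which is the qualitative gain the two-step procedure formalizes.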

Conditional heteroscedasticity has often been used in modelling and understanding the variability of statistical data. Under a general setup that includes the nonlinear time series model as a special case, we propose an efficient and adaptive method for estimating the conditional variance. The basic idea is to apply a local linear regression to the squared residuals. We demonstrate that, without knowing the regression function, we can estimate the conditional variance asymptotically as well as if the regression function were given. This asymptotic result, established under the assumption that the observations come from a strictly stationary and absolutely regular process, is also verified via simulation. Further, the asymptotic result paves the way for an automatic bandwidth selection scheme. An application with financial data illustrates the usefulness of the proposed techniques.
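The basic idea described above can be sketched in a few lines. This is a toy implementation under simplifying assumptions (i.i.d. data, a Gaussian kernel, and a fixed bandwidth of my choosing rather than the paper's automatic selection): fit the mean by local linear regression, then regress the squared residuals on the covariate with the same smoother.

```python
import numpy as np

# Sketch: conditional variance via a local linear fit to squared residuals.
# The model, kernel and bandwidth below are illustrative choices only.

def local_linear(x0, x, y, h):
    """Local linear estimate of E[y | x = x0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta, *_ = np.linalg.lstsq(X * w[:, None] ** 0.5, y * w ** 0.5, rcond=None)
    return beta[0]

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 400))
sigma = 0.5 + 0.5 * x                              # true conditional std dev
y = np.sin(2 * np.pi * x) + sigma * rng.standard_normal(400)

h = 0.1
mhat = np.array([local_linear(t, x, y, h) for t in x])     # mean function
r2 = (y - mhat) ** 2                                        # squared residuals
varhat = np.array([local_linear(t, x, r2, h) for t in x])  # conditional variance
```

The estimated variance function rises with `x`, tracking the true `sigma**2`, even though the regression function was itself estimated.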

Brumback and Rice are to be congratulated for this neat and excellent paper on smoothing spline models for the analysis of nested and crossed samples of curves. Of particular importance are the connections between smoothing spline methods and mixed effects models. Such connections are not only important for an intuitive understanding of smoothing spline methods, but also elegant for deriving methods for selecting smoothing parameters. With modern technology, data can nowadays easily be collected in the form of curves. Fully processing the information contained in sample curves is a challenging and emerging subject in statistics. The subject has strong connections with traditional longitudinal data analysis (see, for example, Diggle, Liang and Zeger 1994 and Hand and Crowder 1996). The ideas presented by Brumback and Rice and in this discussion are expected to have a strong impact on both functional data analysis and longitudinal data analysis. We welcome the opportunity to make a few comments and to present other simple alternative methods that will be helpful for the future development of the subject.

Functional linear models are useful in longitudinal data analysis. They include many classical and recently proposed statistical models for longitudinal data and other functional data. Recently, smoothing spline and kernel methods have been proposed for estimating their coefficient functions nonparametrically, but these methods are either computationally intensive or inefficient in performance. To overcome these drawbacks, this paper proposes a simple and powerful two-step alternative. In particular, implementation of the proposed approach via local polynomial smoothing is discussed. Methods for estimating the standard deviations of the estimated coefficient functions are also proposed. Some asymptotic results for the local polynomial estimators are established. Two longitudinal data sets, one of which involves time-dependent covariates, are used to demonstrate the proposed approach. Simulation studies show that our two-step approach improves on the kernel method proposed in Hoover et al. (1998) in several respects, such as accuracy, computation time and visual appeal of the estimators.

Several new tests are proposed for examining the adequacy of a family of parametric models against large nonparametric alternatives. These tests formally check whether the bias vector of residuals from parametric fits is negligible, using the adaptive Neyman test and other methods. The testing procedures formalize the traditional model diagnostic tools based on residual plots. We examine the rates of contiguous alternatives that can be detected consistently by the adaptive Neyman test. Applications of the procedures to partially linear models are thoroughly discussed. Our simulation studies show that the new testing procedures are indeed powerful and omnibus. The power of the proposed tests is comparable to that of the F-test even in situations where the F-test is known to be suitable, and can be far greater in other situations. An application to testing linear models versus additive models is also addressed.

With modern technology, massive data can easily be collected in the form of multiple sets of curves. A new statistical challenge is testing whether there is any statistically significant difference among these sets of curves. In this paper, we propose some new tests for comparing two groups of curves based on the adaptive Neyman test and the wavelet thresholding techniques introduced in Fan (1996). We demonstrate that these tests inherit the properties outlined in Fan (1996) and are simple and powerful for detecting differences between two sets of curves. We then further generalize the idea to compare multiple sets of curves, resulting in an adaptive high-dimensional analysis of variance, called HANOVA. These newly developed techniques are illustrated with a dataset of pizza commercials, where the observations are curves, and an analysis of corneal topography in ophthalmology, where images of individuals are observed. A simulation example is also presented to illustrate the power of the adaptive Neyman test.
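The adaptive Neyman statistic of Fan (1996) can be sketched directly. The two-sample setup below is hypothetical (I assume the curves have already been transformed so that any signal concentrates in the leading coordinates, which is the role the Fourier/wavelet step plays in the paper): under the null the standardized differences of group means are roughly N(0, 1), and the statistic scans partial sums of their squares.

```python
import numpy as np

def adaptive_neyman(z):
    """Adaptive Neyman statistic for standardized coefficients z
    (approximately N(0,1) under the null): the maximum over m of the
    normalized partial sums of z_i^2 - 1."""
    m = np.arange(1, z.size + 1)
    return np.max(np.cumsum(z ** 2 - 1) / np.sqrt(2 * m))

# Hypothetical two-group example: 30 curves per group on a 64-point grid;
# group 2 differs from group 1 only in the first 8 coordinates.
rng = np.random.default_rng(2)
n_curves, n_grid = 30, 64
g1 = rng.standard_normal((n_curves, n_grid))
bump = np.where(np.arange(n_grid) < 8, 1.0, 0.0)
g2 = bump + rng.standard_normal((n_curves, n_grid))

diff = (g1.mean(0) - g2.mean(0)) / np.sqrt(2.0 / n_curves)  # standardized
t_alt = adaptive_neyman(diff)                      # large: groups differ
t_null = adaptive_neyman(rng.standard_normal(n_grid))  # small: pure noise
```

Because the maximum adapts over the truncation point `m`, the test remains powerful without knowing in advance how many coordinates carry the difference.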

… *δ*)-th moment for any *δ* > 0. We establish a sharp phase transition for robust estimation of regression parameters in both low and high dimensions: when *δ* ≥ 1, the estimator admits a sub-Gaussian-type deviation bound without sub-Gaussian assumptions on the data, while only a slower rate is available in the regime 0 < *δ* < 1, and the transition is smooth and optimal. In addition, we extend the methodology to allow both heavy-tailed predictors and observation noise. Simulation studies lend further support to the theory. In a genetic study of cancer cell lines that exhibit heavy-tailedness, the proposed methods are shown to be more robust and predictive.
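A minimal sketch of the robust estimation idea in its simplest (location) form, under my own illustrative choices: a Huber-type M-estimator with a fixed truncation level `tau`, computed by iterative reweighting. The data-driven, adaptive calibration of the robustification parameter, which the theory above concerns, is not reproduced here.

```python
import numpy as np

# Huber-type robust location estimation for heavy-tailed data.
# tau = 5.0 is an illustrative fixed choice, not an adaptive rule.

def huber_mean(x, tau, iters=50):
    """Huber M-estimator of location via iteratively reweighted averaging:
    solves sum_i psi(x_i - mu) = 0 with psi(r) = r * min(1, tau/|r|)."""
    mu = np.median(x)
    for _ in range(iters):
        r = x - mu
        w = np.minimum(1.0, tau / np.maximum(np.abs(r), 1e-12))
        mu = np.sum(w * x) / np.sum(w)
    return mu

rng = np.random.default_rng(3)
# Student t with 1.5 degrees of freedom: heavy tails, with finite
# (1 + delta)-th moment only for delta < 0.5; true location is 0.
x = rng.standard_t(1.5, size=2000)
mu_huber = huber_mean(x, tau=5.0)
```

Truncating the influence of extreme observations is what delivers deviation bounds far better than the empirical mean's under such tails.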

This paper deals with statistical inference based on the generalized varying-coefficient models proposed by Hastie and Tibshirani (1993). Local polynomial regression techniques are used to estimate the coefficient functions, and the asymptotic normality of the resulting estimators is established. Standard error formulas for the estimated coefficients are derived and empirically tested. A goodness-of-fit test, based on a nonparametric maximum likelihood ratio type of test, is also proposed to detect whether certain coefficient functions in a varying-coefficient model are constant or whether any covariates are statistically significant in the model. The null distribution of the test is estimated by a conditional bootstrap method. Our estimation techniques involve solving hundreds of local likelihood equations. To reduce the computational burden, a one-step Newton-Raphson estimator is proposed and implemented. We show that the resulting one-step procedure can reduce computational cost by an order of tens without deteriorating performance, either asymptotically or empirically. Both simulated and real data examples are used to illustrate the proposed methodology.
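The one-step Newton-Raphson idea can be illustrated in a plain parametric setting (ordinary logistic regression rather than the paper's local likelihood; the model and starting value below are toy choices): from an initial estimate, a single score/information update replaces full iteration.

```python
import numpy as np

# Toy one-step Newton-Raphson estimator for logistic regression.
# In the paper's setting this update would be applied at each of the
# hundreds of local likelihood equations, saving full iteration at each.

def newton_step(beta, X, y):
    """One Newton-Raphson update of the logistic log-likelihood."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    score = X.T @ (y - p)                       # gradient of log-likelihood
    info = (X * (p * (1 - p))[:, None]).T @ X   # observed information
    return beta + np.linalg.solve(info, score)

rng = np.random.default_rng(4)
n = 500
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([0.5, -1.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

beta0 = np.zeros(2)                 # crude initial value
beta1 = newton_step(beta0, X, y)    # one-step estimator
```

When the initial value is already root-n consistent, the one-step estimator shares the asymptotic behavior of the fully iterated maximizer at a fraction of the cost, which is the computational point made in the abstract.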