Construction of confidence sets is an important topic in statistical inference. In this dissertation, we propose an adaptive method for constructing honest confidence sets for the regression mean vector, and a framework for constructing confidence sets after model selection. The dissertation is divided into two parts.
The issue of honesty in constructing confidence sets arises in nonparametric regression. While the optimal rate in nonparametric estimation can be achieved and used to construct sharp confidence sets, the confidence level often degrades severely once the degree of smoothness must be estimated. Similarly, in high-dimensional regression, oracle inequalities for sparse estimators can be used to construct sharp confidence sets, yet the degree of sparsity is itself unknown and must be estimated, which creates the same honesty problem. To resolve this issue, we develop a novel method for constructing honest confidence sets in sparse high-dimensional linear regression. The key idea is to separate the signals into a strong group and a weak group, and to construct a confidence set for each group separately. This is achieved by a projection and shrinkage approach, the latter implemented via Stein estimation and the associated Stein unbiased risk estimate (SURE). After the confidence sets for the two groups are combined, the resulting confidence set is honest over the full parameter space without any sparsity constraint, while its size adapts to the optimal rate of $n^{-1/4}$ when the true parameter is indeed sparse. Moreover, under a form of separation assumption between the strong and weak signals, the diameter of our confidence set attains a faster rate than existing methods. Through extensive numerical comparisons, we demonstrate that for finite samples our method outperforms competing procedures by wide margins, including oracle methods built upon the true sparsity of the underlying model.
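To fix ideas, the shrinkage step can be sketched in its standard form (stated here as background; the dissertation's exact estimator may differ). For an observed vector $y \in \mathbb{R}^p$ with mean $\theta$ and known noise level $\sigma^2$, the James-Stein estimator shrinks $y$ toward zero,
$$\hat{\theta}^{\mathrm{JS}} = \left(1 - \frac{(p-2)\sigma^2}{\|y\|_2^2}\right) y,$$
and, for a general shrinkage estimator $\hat{\theta}(y) = y + g(y)$ with weakly differentiable $g$, Stein's unbiased risk estimate
$$\mathrm{SURE}(\hat{\theta}) = p\sigma^2 + \|g(y)\|_2^2 + 2\sigma^2\, \nabla \cdot g(y)$$
satisfies $\mathbb{E}\,\mathrm{SURE}(\hat{\theta}) = \mathbb{E}\|\hat{\theta}(y) - \theta\|_2^2$. It is this unbiasedness that allows the risk of the shrinkage estimator for the weak-signal group to be assessed from the data alone, without knowledge of $\theta$.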
Apart from the construction of joint confidence sets, constructing confidence sets after model selection is an essentially different and more challenging problem: the sampling distributions are restricted to irregular subsets of the sample space, which makes maintaining the confidence level more difficult. To address this problem, we develop a new framework that admits a Bayesian interpretation and constructs credible sets conditional on the active sets of lasso estimates. The framework allows flexible choices of the prior distributions, which serve as regularizers for the credible sets. Our preliminary research shows that certain credible sets are provably valid frequentist confidence sets, although the size of the credible sets and the adaptivity of their diameters remain to be studied. Lastly, we explore the possibility of extending this framework to a broad class of generalized linear models and to confidence sets conditional on block lasso estimates.
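One common way to formalize the post-selection target, stated as background rather than as the dissertation's exact criterion: writing $\hat{A}(y)$ for the active set selected by the lasso, a set $C(y)$ has selective coverage at level $1-\alpha$ if
$$\Pr_\beta\bigl\{\beta \in C(y) \,\big|\, \hat{A}(y) = A\bigr\} \ge 1 - \alpha$$
for every attainable active set $A$. The conditioning restricts the sampling distribution to the irregular region $\{y : \hat{A}(y) = A\}$, which is precisely what makes maintaining the confidence level after selection difficult.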