Academic Commons

Theses Doctoral

High-dimensional asymptotics: new insights and methods

Wang, Shuaiwen

As a central object in statistics, the linear model y = Ax + w has received sustained attention for decades. With the emergence of new data, new problems, and new techniques, it remains of great interest to study this model under different settings. In this thesis, we focus on an asymptotic framework in which the number of observations n is comparable to the number of variables p, and only a subset of k components of the coefficient vector x is nonzero, with k comparable to p. We study the prediction, variable selection, and concentration properties of several techniques.

Regarding variable selection, we consider a class of two-stage variable selection procedures: in the first stage we compute an optimally tuned bridge regression estimator

x_hat = argmin_x (1/2) ||y - Ax||_2^2 + gamma ||x||_q^q,

and in the second stage we threshold this estimator. We then compare LASSO with our two-stage procedures, and further discuss the best choice of q in the first stage. It turns out that the variable selection performance of such procedures depends on the estimation mean-square error (MSE) of the bridge estimator. This motivates us to further study the estimation accuracy of bridge estimators and to compare their MSEs for different choices of q. The tool of approximate message passing enables us to characterize the limiting MSE and to provide an accurate comparison between different estimators.

Next we move our focus to the SLOPE estimator

x_hat := argmin_x (1/2) ||y - Ax||_2^2 + gamma sum_{i=1}^p lambda_i |x|_(i),

where lambda_1 >= ... >= lambda_p >= 0 are the regularization parameters and |x|_(1) >= ... >= |x|_(p) >= 0 denote the components of the signal (or regression coefficients) in decreasing order of magnitude. We provide an accurate comparison between the MSE of SLOPE and that of the bridge estimators. The non-separable nature of SLOPE makes it hard to characterize its limiting MSE as p -> infinity.
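To make this non-separability concrete, here is a minimal illustrative Python sketch (not taken from the thesis) of evaluating the SLOPE penalty. Because the weight applied to each coordinate depends on the rank of its magnitude, the penalty couples all coordinates through sorting, unlike the bridge penalty ||x||_q^q, which sums |x_i|^q term by term:

```python
import numpy as np

def slope_penalty(x, lam):
    # Computes sum_{i=1}^p lam_i * |x|_(i), where |x|_(1) >= ... >= |x|_(p)
    # are the absolute entries of x sorted in decreasing order and
    # lam_1 >= ... >= lam_p >= 0 are the regularization weights.
    abs_desc = np.sort(np.abs(x))[::-1]
    return float(np.dot(np.asarray(lam, dtype=float), abs_desc))

x = np.array([3.0, -1.0, 2.0])
print(slope_penalty(x, [3.0, 2.0, 1.0]))  # 3*3 + 2*2 + 1*1 = 14.0
print(slope_penalty(x, [1.0, 1.0, 1.0]))  # equal weights reduce to ||x||_1 = 6.0
```

With all lambda_i equal, SLOPE reduces to the LASSO penalty; unequal weights make the penalty depend on the ordering of the coordinates, which is what precludes the coordinate-wise (separable) analysis available for bridge estimators.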
Hence we first prove concentration inequalities for its MSE in the finite-sample setting and characterize the concentrated mean through a system of equations. Using these concentration results, we show that SLOPE has a larger MSE than LASSO in a low-noise regime and a larger MSE than Ridge in a large-noise regime.
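The two-stage procedure described in the abstract can be sketched in Python as follows. This is a hedged illustration, not the thesis's optimally tuned implementation: it takes q = 1 (where the bridge penalty reduces to the LASSO), solves the first stage by proximal gradient descent (ISTA), and then hard-thresholds the estimate to select variables. The problem dimensions, gamma, and the threshold tau are made-up illustrative values.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (the bridge penalty with q = 1).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bridge_q1(A, y, gamma, n_iter=500):
    # Stage 1: solve min_x (1/2)||y - Ax||_2^2 + gamma*||x||_1 via ISTA.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, gamma / L)
    return x

def two_stage_select(A, y, gamma, tau):
    # Stage 2: threshold the stage-1 estimate; keep indices with |x_i| > tau.
    x1 = bridge_q1(A, y, gamma)
    return np.where(np.abs(x1) > tau)[0]

# Illustrative data: k = 5 nonzero coefficients out of p = 50.
rng = np.random.default_rng(0)
n, p, k = 100, 50, 5
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[:k] = 3.0
y = A @ x_true + 0.1 * rng.standard_normal(n)
selected = two_stage_select(A, y, gamma=0.05, tau=0.5)
print(selected)
```

The thresholding stage is what can remove the small spurious coefficients that the stage-1 estimator retains; as the abstract notes, how well this works is governed by the estimation MSE of the first-stage bridge estimator.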



More About This Work

Thesis Advisors
Maleki, Arian
Ph.D., Columbia University
Published Here
February 21, 2020