Affiliation:
1. Department of Statistics, Columbia University, NY 10027, USA
2. Department of Statistics and Probability, Michigan State University, MI 48824, USA
Abstract
The recently proposed SLOPE estimator [6] has been shown to adaptively achieve the minimax $\ell_2$ estimation rate under high-dimensional sparse linear regression models [25]. Such minimax optimality holds in the regime where the sparsity level $k$, sample size $n$, and dimension $p$ satisfy $k/p \rightarrow 0$ and $k\log p/n \rightarrow 0$. In this paper, we characterize the estimation error of SLOPE in the complementary regime where both $k$ and $n$ scale linearly with $p$, and provide new insights into the performance of SLOPE estimators. We first derive a concentration inequality for the finite-sample mean squared error (MSE) of SLOPE. The quantity that the MSE concentrates around takes a complicated and implicit form. Through a delicate analysis of this quantity, we prove that among all SLOPE estimators, LASSO is optimal for estimating $k$-sparse parameter vectors without tied nonzero components in the low-noise scenario. In the large-noise scenario, on the other hand, the family of SLOPE estimators is sub-optimal compared with bridge regression estimators such as Ridge.
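For concreteness, the SLOPE estimator of [6] solves $\hat\beta = \arg\min_b \tfrac{1}{2}\|y - Xb\|_2^2 + \sum_{i=1}^{p} \lambda_i |b|_{(i)}$, where $\lambda_1 \ge \cdots \ge \lambda_p \ge 0$ and $|b|_{(1)} \ge \cdots \ge |b|_{(p)}$ are the sorted absolute values of $b$; LASSO is the special case where all $\lambda_i$ are equal. Below is a minimal illustrative sketch (not the paper's code) of a proximal-gradient solver, with the sorted-$\ell_1$ prox computed by a stack-based pool-adjacent-violators scheme in the spirit of [6]; the function names `prox_sorted_l1` and `slope` are our own.

```python
import numpy as np

def prox_sorted_l1(v, lam):
    """Prox of the sorted-l1 penalty sum_i lam_i * |v|_(i),
    with lam a nonincreasing, nonnegative vector."""
    sign = np.sign(v)
    absv = np.abs(v)
    order = np.argsort(-absv)           # indices sorting |v| in decreasing order
    z = absv[order] - lam               # shifted values to be projected
    # Project z onto the nonincreasing cone via pool-adjacent-violators:
    # maintain blocks as [block_sum, block_len], merging on violations.
    blocks = []
    for zi in z:
        blocks.append([zi, 1])
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] <= blocks[-1][0] / blocks[-1][1]:
            s, l = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += l
    # Expand block averages, clamp at zero, undo the sort and restore signs.
    x_sorted = np.concatenate([np.full(l, max(s / l, 0.0)) for s, l in blocks])
    x = np.empty_like(v)
    x[order] = x_sorted
    return sign * x

def slope(X, y, lam, n_iter=500):
    """Proximal-gradient (ISTA) solver for
    min_b 0.5 * ||y - X b||_2^2 + sum_i lam_i * |b|_(i)."""
    step = 1.0 / np.linalg.norm(X, ord=2) ** 2   # 1 / Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)
        b = prox_sorted_l1(b - step * grad, step * lam)
    return b
```

Passing a constant sequence `lam` recovers the LASSO, so the sketch covers the whole SLOPE family compared in the abstract; Ridge, the large-noise benchmark, uses an $\ell_2$ penalty and is not produced by this solver.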
Funder
National Science Foundation
Publisher
Oxford University Press (OUP)
Subject
Applied Mathematics, Computational Theory and Mathematics, Numerical Analysis, Statistics and Probability, Analysis
Cited by
6 articles.