
Accelerated Gradient Algorithms for Variable Selection with Nonconvex Penalties


Kai Yang, Masoud Asgharian, Sahir Bhatnagar (McGill University)

Kai Yang, Email: kai.yang2@mail.mcgill.ca

Variable selection has important applications in areas such as bioinformatics. Nonconvex penalties such as SCAD and MCP possess the oracle property, which in general makes them a better choice than the LASSO for variable selection. However, their nonconvexity and nonsmoothness pose a challenge for statistical computing, particularly with high-dimensional data. When the statistical dimensionality of the data grows large (such as the number of SNPs), second-order methods are usually inefficient, since they must evaluate secant conditions at every step and lack global convergence guarantees without a line search. Coordinate descent methods usually come without proofs of global convergence, and the rates of convergence for such methods are typically infeasible to establish. ISTA handles the nonsmooth penalty by replacing the gradient step of gradient descent with a proximal (shrinkage-thresholding) step; however, ISTA is inefficient on ill-conditioned problems. FISTA was proposed to address this issue by using Nesterov's accelerated gradient (AG) method in place of plain gradient descent. However, Nesterov's AG does not achieve global convergence for nonconvex problems. We therefore propose an accelerated gradient method for variable selection with nonconvex penalties. Simulation studies show that our method significantly improves the rate of convergence.
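To make the proximal gradient mechanics concrete, below is a minimal sketch in Python (NumPy) of a FISTA-style iteration for least squares with the MCP penalty. The function names (`mcp_prox`, `fista_mcp`) and all tuning values are illustrative, not from the paper, and the classical FISTA momentum rule used here carries no global convergence guarantee for this nonconvex penalty; that gap is precisely what the proposed method addresses.

```python
import numpy as np

def mcp_prox(z, lam, gamma, step):
    """Proximal operator of the MCP penalty scaled by `step`.

    Requires gamma > step so the rescaling below is well defined:
      0                                      if |z| <= step*lam
      sign(z)(|z| - step*lam)/(1 - step/g)   if step*lam < |z| <= gamma*lam
      z                                      if |z| >  gamma*lam
    """
    shrunk = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return np.where(np.abs(z) <= gamma * lam, shrunk / (1.0 - step / gamma), z)

def fista_mcp(X, y, lam=0.1, gamma=3.0, n_iter=500, tol=1e-8):
    """FISTA-style proximal gradient for (1/2n)||y - X b||^2 + MCP (a sketch)."""
    n, p = X.shape
    L = np.linalg.norm(X, ord=2) ** 2 / n   # Lipschitz constant of the gradient
    step = 1.0 / L
    assert gamma > step, "MCP prox needs gamma > step size"
    beta = np.zeros(p)
    z, t = beta.copy(), 1.0                 # extrapolated point and momentum term
    for _ in range(n_iter):
        grad = X.T @ (X @ z - y) / n        # gradient of the least-squares loss
        beta_new = mcp_prox(z - step * grad, lam, gamma, step)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = beta_new + ((t - 1.0) / t_new) * (beta_new - beta)
        if np.linalg.norm(beta_new - beta) < tol:
            return beta_new
        beta, t = beta_new, t_new
    return beta

# Illustrative use on synthetic data with 5 true signals:
rng = np.random.default_rng(0)
n, p = 100, 200
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 2.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)
beta_hat = fista_mcp(X, y, lam=0.2)
print(np.flatnonzero(np.abs(beta_hat) > 1e-6))  # indices of selected variables
```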
