
A GREEDY ALGORITHM FOR SPARSE PRECISION MATRIX APPROXIMATION

Didi Lv¹, Xiaoqun Zhang²

  1. School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China;
  2. School of Mathematical Sciences, MOE-LSC and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
  • Received: 2019-06-27; Revised: 2019-12-25; Published: 2021-10-15
  • Supported by:
    This work was supported by the National Key Research and Development Program (No. 2017YFB0202902) and the NSFC (Nos. 11771288 and 12090024). We also thank the Student Innovation Center at Shanghai Jiao Tong University for providing computing services.

Didi Lv, Xiaoqun Zhang. A GREEDY ALGORITHM FOR SPARSE PRECISION MATRIX APPROXIMATION[J]. Journal of Computational Mathematics, 2021, 39(5): 693-707.

Precision matrix estimation is an important problem in statistical data analysis. This paper proposes a sparse precision matrix estimation approach based on the CLIME estimator and the efficient algorithm GISSρ, originally proposed for l1-based sparse signal recovery in compressed sensing. The asymptotic convergence rate of the sparse precision matrix estimate is analyzed with respect to the new stopping criterion of the proposed GISSρ algorithm. Finally, GISSρ is compared numerically with other sparse recovery algorithms, such as ADMM and HTP, in three precision matrix estimation settings; the results demonstrate the advantages of the proposed algorithm.
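For context, the CLIME estimator on which the approach is built (Cai, Liu and Luo, 2011) is a constrained l1 minimization problem: given the sample covariance matrix $\hat{\Sigma}_n$ and a tuning parameter $\lambda_n > 0$,

$$ \hat{\Omega} = \arg\min_{\Omega \in \mathbb{R}^{p \times p}} \|\Omega\|_1 \quad \text{subject to} \quad \|\hat{\Sigma}_n \Omega - I\|_\infty \le \lambda_n, $$

where $\|\cdot\|_1$ and $\|\cdot\|_\infty$ denote the elementwise l1 and maximum norms. The constraint decouples across columns, so the program reduces to p independent l1 sparse recovery problems, one per column of $\hat{\Omega}$; this is where an l1 solver such as GISSρ enters. A symmetrization step follows, since the columnwise solutions need not form a symmetric matrix.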
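As a concrete point of comparison, below is a minimal sketch of the HTP baseline mentioned in the abstract (hard thresholding pursuit, Foucart, 2011), applied to a generic sparse recovery model y = Ax with a known sparsity level s; the function name and test setup are illustrative and not taken from the paper.

    import numpy as np

    def htp(A, y, s, max_iter=100):
        """Hard Thresholding Pursuit: recover an s-sparse x from y = A x.

        Assumes A has roughly unit-norm columns, so a fixed step size of 1
        is reasonable.
        """
        n = A.shape[1]
        x = np.zeros(n)
        support = np.zeros(0, dtype=int)
        for _ in range(max_iter):
            # Gradient step on 0.5 * ||y - A x||^2, keep the s largest entries.
            u = x + A.T @ (y - A @ x)
            new_support = np.sort(np.argsort(np.abs(u))[-s:])
            # HTP's natural stopping rule: the selected support has stabilized.
            if np.array_equal(new_support, support):
                break
            support = new_support
            # Debiasing: least squares restricted to the chosen support.
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            x = np.zeros(n)
            x[support] = coef
        return x

    # Illustrative use on synthetic data: Gaussian design, 5-sparse signal.
    rng = np.random.default_rng(0)
    m, n, s = 80, 200, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    print(np.linalg.norm(htp(A, A @ x_true, s) - x_true))  # ~0 on noiseless data

Unlike HTP, which fixes the sparsity level s in advance, greedy inverse-scale-space methods such as GISSρ grow the support adaptively as the iteration proceeds, which is what the stopping criterion analyzed in the paper controls.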

