
Hybrid Greedy Randomized Coordinate Descent Method for Solving Tensor Equation Accompanied by Einstein Product

Xie Yajun1, Ma Changfeng2

  1. Institute of Science and Technology, Fuzhou University of International Studies and Trade, Changle 350202, China;
  2. College of Mathematics and Informatics, Fujian Normal University, Fuzhou 350007, China
  • Received: 2020-03-21  Online: 2021-12-15  Published: 2021-12-07
  • Supported by:
    The Natural Science Foundation of Fujian Province (2019J01879) and the New Century Excellent Talents Program for Universities of Fujian Province (闽教科[2017]52).

Xie Yajun, Ma Changfeng. HYBRID GREEDY RANDOMIZED COORDINATE DESCENT METHOD FOR SOLVING TENSOR EQUATION ACCOMPANIED BY EINSTEIN PRODUCT[J]. Journal on Numerical Methods and Computer Applications, 2021, 42(4): 323-336.

The greedy randomized coordinate descent method is an effective method proposed recently for solving large least-squares problems. By introducing an "index of maximum decline of the residual", this paper presents a modified version of the greedy randomized coordinate descent method together with an efficient two-step hybrid acceleration algorithm. The modified algorithms are then applied to solve tensor equations with the Einstein product. Both theoretical analysis and numerical experiments verify the feasibility and effectiveness of the proposed methods.
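
For context, the Einstein product *_N appearing in the title and abstract is the standard contraction over N shared modes used in the tensor-equation literature; in that notation the equation in question and its least-squares form can be written as follows (a standard formulation, not copied from the paper):

\[
  (\mathcal{A} *_N \mathcal{X})_{i_1 \cdots i_M\, j_1 \cdots j_L}
  = \sum_{k_1=1}^{K_1} \cdots \sum_{k_N=1}^{K_N}
    \mathcal{A}_{i_1 \cdots i_M\, k_1 \cdots k_N}\,
    \mathcal{X}_{k_1 \cdots k_N\, j_1 \cdots j_L},
  \qquad
  \mathcal{A} *_N \mathcal{X} = \mathcal{B},
  \qquad
  \min_{\mathcal{X}} \|\mathcal{A} *_N \mathcal{X} - \mathcal{B}\|_F .
\]

As a point of reference only, the following is a minimal sketch of a plain greedy randomized coordinate descent step for the matrix least-squares problem min_x ||Ax - b||_2, the family of methods the abstract builds on. The function name greedy_randomized_cd and the threshold parameter theta are illustrative choices; the paper's "index of maximum decline of the residual" rule and its two-step hybrid acceleration are not reproduced here.

    import numpy as np

    def greedy_randomized_cd(A, b, max_iter=5000, theta=0.5, tol=1e-10, seed=0):
        # Simplified greedy randomized coordinate descent for min_x ||A x - b||_2.
        # Coordinates whose scaled residual |s_j|^2 / ||A_j||^2 is within a factor
        # theta of the largest one form a candidate set; one of them is drawn with
        # probability proportional to |s_j|^2 and minimized over exactly.
        rng = np.random.default_rng(seed)
        n = A.shape[1]
        x = np.zeros(n)
        col_norms2 = np.sum(A * A, axis=0)      # ||A_j||_2^2 for each column j
        for _ in range(max_iter):
            s = A.T @ (b - A @ x)               # negative gradient of 0.5*||Ax - b||^2
            if np.linalg.norm(s) <= tol:
                break
            gains = s**2 / col_norms2           # estimated decrease along each coordinate
            candidates = np.flatnonzero(gains >= theta * gains.max())
            weights = s[candidates]**2
            j = rng.choice(candidates, p=weights / weights.sum())
            x[j] += s[j] / col_norms2[j]        # exact line minimization along e_j
        return x

    # Small consistency check on a random overdetermined system.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    b = A @ rng.standard_normal(50)
    print(np.linalg.norm(A @ greedy_randomized_cd(A, b) - b))

One common route in the literature is to unfold an Einstein-product tensor equation into an equivalent matrix least-squares problem, to which a coordinate descent update of this kind applies directly.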
