Year of Award
Master of Philosophy (MPhil)
Department of Mathematics.
Convex programming; Mathematical optimization
In the literature, error bound conditions have been widely used to study the linear convergence rates of various first-order algorithms, and most works focus on sufficient conditions guaranteeing these error bounds, usually at the cost of additional assumptions on the model under discussion. In this thesis, we focus on the alternating direction method of multipliers (ADMM) and show that the known error bound conditions used to establish ADMM's linear convergence can be further weakened if the error bound is examined only along the specific iterative sequence generated by ADMM. A so-called partial error bound condition, tailored to ADMM's iterative scheme and weaker than the error bound conditions known in the literature, is thus proposed to derive the linear convergence of ADMM. We further show that this partial error bound condition theoretically explains the difference that arises when the two primal variables are updated in different orders in implementing ADMM, a phenomenon that had been observed empirically in the literature but for which no theory was previously known.
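To make the iterative scheme referenced in the abstract concrete, the following is a minimal sketch of two-block ADMM on a toy consensus problem, minimize (1/2)||x - a||^2 + (1/2)||z - b||^2 subject to x - z = 0. The objective, the penalty parameter `rho`, the iteration count, and the `x_first` flag are illustrative assumptions chosen only to show how the two primal variables can be updated in either order; this is not the model analyzed in the thesis.

```python
import numpy as np

def admm(a, b, rho=1.0, iters=100, x_first=True):
    """Two-block ADMM (scaled dual form) for the toy problem
    minimize (1/2)||x - a||^2 + (1/2)||z - b||^2  s.t.  x - z = 0.
    All parameters here are illustrative assumptions."""
    x = np.zeros_like(a)
    z = np.zeros_like(b)
    u = np.zeros_like(a)  # scaled dual variable for the constraint x - z = 0
    for _ in range(iters):
        if x_first:
            # x-update: argmin_x (1/2)||x - a||^2 + (rho/2)||x - z + u||^2
            x = (a + rho * (z - u)) / (1 + rho)
            # z-update: argmin_z (1/2)||z - b||^2 + (rho/2)||x - z + u||^2
            z = (b + rho * (x + u)) / (1 + rho)
        else:
            # same two subproblems, primal variables updated in the other order
            z = (b + rho * (x + u)) / (1 + rho)
            x = (a + rho * (z - u)) / (1 + rho)
        u = u + x - z  # dual update along the constraint residual
    return x, z
```

For this toy instance both update orders converge to the same consensus point x = z = (a + b)/2; the thesis's point is that, in general, the update order affects the analysis, and the partial error bound condition distinguishes the two orderings theoretically.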
Includes bibliographical references (pages 51-55).
Zeng, Shangzhi, "Algorithm-tailored error bound conditions and the linear convergence rate of ADMM" (2017). Open Access Theses and Dissertations. 474.
Available for download on Monday, April 06, 2020