DOI

http://dx.doi.org/10.1016/j.neucom.2014.10.069

Document Type

Journal Article

Department/Unit

Department of Mathematics

Title

Folded-concave penalization approaches to tensor completion

Language

English

Abstract

© 2014 Elsevier B.V. Existing studies of matrix and tensor completion problems are commonly carried out within the nuclear norm penalization framework, owing to the computational efficiency of the resulting convex optimization problem. Folded-concave penalization methods have seen remarkable development in sparse learning problems thanks to their attractive practical and theoretical properties. To bring these benefits to tensor completion, we propose a new tensor completion model based on folded-concave penalties for estimating missing values in tensor data. Two typical folded-concave penalties, the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD) penalty, are employed in the new model. To solve the resulting nonconvex optimization problem, we develop a local linear approximation augmented Lagrange multiplier (LLA-ALM) algorithm that combines a two-step LLA strategy to efficiently locate a local optimum of the proposed model. Finally, we provide numerical experiments with phase transitions, synthetic data sets, and real image and video data sets to demonstrate the superiority of the proposed model over nuclear norm penalization in terms of accuracy and robustness.
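
For reference, the two penalties named in the abstract are standard in the sparse learning literature. A minimal sketch of their usual scalar definitions follows; the parameterization shown here is the common textbook convention, not necessarily the exact form used in the paper, and in the tensor completion setting such penalties are typically applied to the singular values of the tensor unfoldings in place of the nuclear norm.

MCP, with regularization parameter $\lambda > 0$ and concavity parameter $\gamma > 1$:
$$P_{\lambda,\gamma}(t) = \lambda \int_0^{|t|} \Big(1 - \frac{x}{\gamma\lambda}\Big)_+ \, dx
= \begin{cases} \lambda |t| - \dfrac{t^2}{2\gamma}, & |t| \le \gamma\lambda, \\[4pt] \dfrac{\gamma\lambda^2}{2}, & |t| > \gamma\lambda. \end{cases}$$

SCAD, with $a > 2$ (a common default is $a = 3.7$), defined through its derivative for $t > 0$:
$$P'_{\lambda}(t) = \lambda \left\{ I(t \le \lambda) + \frac{(a\lambda - t)_+}{(a-1)\lambda} \, I(t > \lambda) \right\}.$$

Both penalties behave like the $\ell_1$ (lasso) penalty near the origin but level off for large arguments, which is the mechanism that reduces the estimation bias of nuclear-norm-type relaxations.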

Keywords

Folded-concave penalization, Local linear approximation, Nuclear norm, Sparse learning, Tensor completion

Publication Date

2015

Source Publication Title

Neurocomputing

Volume

152

Start Page

261

End Page

273

Publisher

Elsevier

ISSN (print)

0925-2312

