Document Type

Journal Article

Department/Unit

Department of Mathematics

Language

English

Abstract

We study a multi-instance (MI) learning dimensionality-reduction algorithm based on sparsity and orthogonality, which is especially useful for high-dimensional MI data sets. We develop a novel algorithm to handle both sparsity and orthogonality constraints, which existing methods do not handle well simultaneously. Our main idea is to formulate an optimization problem in which the sparse term appears in the objective function and orthogonality is imposed as a constraint. The resulting optimization problem can be solved by using approximate augmented Lagrangian iterations as the outer loop and inertial proximal alternating linearized minimization (iPALM) iterations as the inner loop. The main advantage of this method is that both sparsity and orthogonality can be satisfied simultaneously. We show the global convergence of the proposed iterative algorithm. We also demonstrate that the proposed algorithm can meet stringent sparsity and orthogonality requirements, which are very important for dimensionality reduction. Experimental results on both synthetic and real data sets show that the proposed algorithm obtains learning performance comparable to that of other tested MI learning algorithms.
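
As a minimal illustrative sketch (not the paper's exact formulation), the split described above, with sparsity in the objective and orthogonality as a constraint, can be written as follows; the projection matrix $W \in \mathbb{R}^{d \times k}$, the generic MI learning loss $f$, and the sparsity weight $\lambda > 0$ are illustrative placeholders, not symbols taken from the paper:

\[
\min_{W \in \mathbb{R}^{d \times k}} \; f(W) + \lambda \|W\|_{1}
\quad \text{subject to} \quad W^{\top} W = I_{k}.
\]

Under this reading, an approximate augmented Lagrangian scheme handles the constraint $W^{\top} W = I_{k}$ in the outer loop, while each inner subproblem is solved with iPALM iterations, matching the outer/inner structure described in the abstract.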

Publication Date

12-2018

Source Publication Title

Neural Computation

Volume

30

Issue

12

Start Page

3281

End Page

3308

Publisher

MIT Press

DOI

10.1162/neco_a_01140

Link to Publisher's Edition

https://doi.org/10.1162/neco_a_01140

ISSN (print)

0899-7667

ISSN (electronic)

1530-888X

Included in

Mathematics Commons
