Author

Wei Shen

Year of Award

August 19, 2019

Degree Type

Thesis

Degree Name

Doctor of Philosophy (PhD)

Department

Department of Mathematics

Principal Supervisor

Tong, Tiejun

Keywords

Machine learning; Mathematical optimization; Stability

Language

English

Abstract

In this thesis, we study the stability of stochastic gradient descent (SGD) algorithms in the pairwise learning setting, together with its trade-off with optimization error. Pairwise learning refers to learning tasks whose loss function depends on pairs of instances; notable examples include bipartite ranking, metric learning, area under the ROC curve (AUC) maximization, and the minimum error entropy (MEE) principle. Our contribution is twofold. First, we establish stability results for SGD for pairwise learning in the convex, strongly convex, and non-convex settings, from which generalization errors can be naturally derived; we also give stability results for buffer-based SGD and projected SGD. Second, we establish a trade-off between the stability and the optimization error of SGD algorithms for pairwise learning. This is achieved by lower-bounding the sum of the stability and the optimization error by the minimax statistical error over a prescribed class of pairwise loss functions. From this fundamental trade-off, we obtain lower bounds on the optimization error of SGD algorithms and on the excess expected risk over a class of pairwise losses. In addition, we illustrate our stability results with specific examples of and experiments on AUC maximization and MEE.
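To make the setting concrete, below is a minimal Python/NumPy sketch of buffer-based SGD for pairwise learning, using a pairwise hinge loss as a convex surrogate for AUC maximization. The function names (pairwise_sgd, auc_hinge_grad), the reservoir-style buffer update, and the step-size schedule eta_t = eta0/sqrt(t) are illustrative assumptions for this page, not the exact algorithm analyzed in the thesis.

    import numpy as np

    def pairwise_sgd(X, y, loss_grad, eta0=0.5, buffer_size=50, seed=0):
        # Buffer-based SGD for pairwise learning: each incoming example
        # is paired with the examples held in a fixed-size buffer, and the
        # iterate is updated with the averaged pairwise (sub)gradient.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        buffer = []
        for step, t in enumerate(rng.permutation(n), start=1):
            z_t = (X[t], y[t])
            if buffer:
                g = sum(loss_grad(w, z_t, z_j) for z_j in buffer) / len(buffer)
                w = w - (eta0 / np.sqrt(step)) * g  # decaying step size
            if len(buffer) < buffer_size:           # reservoir-style update
                buffer.append(z_t)
            else:
                j = rng.integers(buffer_size)
                buffer[j] = z_t
        return w

    def auc_hinge_grad(w, z, z_prime):
        # Subgradient of the pairwise hinge surrogate for AUC:
        # loss(w; z, z') = max(0, 1 - w.(x - x')) for a positive/negative
        # pair, and 0 when the two labels agree.
        (x, y), (xp, yp) = z, z_prime
        if y == yp:
            return np.zeros_like(w)
        if y < yp:                     # orient the pair so x is the positive
            x, xp = xp, x
        if 1.0 - w @ (x - xp) > 0.0:   # hinge is active
            return -(x - xp)
        return np.zeros_like(w)

For instance, w = pairwise_sgd(X, y, auc_hinge_grad) trains a linear scorer on binary-labeled data with y in {-1, +1}. The fixed-size buffer makes each update depend on only a bounded number of past examples, which is the kind of dependence the buffer-based stability analysis described in the abstract is meant to control.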

Comments

Principal supervisor: Dr. Tong Tiejun; Thesis submitted to the Department of Mathematics; Thesis (Ph.D.)--Hong Kong Baptist University, 2019.

Bibliography

Includes bibliographical references (pages 106-110)

Available for download on Sunday, October 17, 2021


