Convex large margin training techniques: unsupervised, semi-supervised, and robust support vector machines
  • Author: Linli Xu
  • Publisher: University of Waterloo, Computer Science Dept., University Avenue, Waterloo, Ont. N2L 3G1, Canada
  • ISBN: 978-0-494-34559-7
  • Order Number: AAINR34559
  • Pages: 138
Abstract

Support vector machines (SVMs) have been a dominant machine learning technique for more than a decade. The intuitive principle behind SVM training is to find the maximum-margin separating hyperplane for a given set of binary-labeled training data. Previously, SVMs have been applied primarily to supervised learning problems, where target class labels are provided with the data. Developing unsupervised extensions of SVMs, where no class labels are given, turns out to be a challenging problem. In this dissertation, I propose a principled approach to unsupervised and semi-supervised SVM training by formulating a convex relaxation of the natural training criterion: find a (constrained) labeling that would yield an optimal SVM classifier on the resulting labeled training data. The relaxation yields a semidefinite program (SDP) that can be solved in polynomial time. The resulting training procedures can be applied to two-class and multi-class problems, and ultimately to the multivariate case, achieving high-quality results in each case. In addition to unsupervised training, I also consider the problem of reducing the outlier sensitivity of standard supervised SVM training. Here I show that a similar convex relaxation can be applied to improve the robustness of SVMs by explicitly suppressing outliers during training. The proposed approach achieves results superior to standard SVMs in the presence of outliers.
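To make the training criterion concrete, the following is a minimal sketch, in notation of my own choosing, of the supervised objective and the unsupervised lifting it motivates, in the spirit of the maximum margin clustering formulation; the exact constraint set used in the dissertation may differ.

```latex
% Supervised soft-margin SVM: find the maximum-margin separating hyperplane.
\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i
\quad\text{s.t.}\quad y_i\,(w^\top x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0.

% Unsupervised case: the labels y \in \{-1,+1\}^n are themselves variables.
% Lift M = y y^\top and relax to the convex set \{M \succeq 0,\ \mathrm{diag}(M) = e\}:
\min_{\substack{M \succeq 0,\ \mathrm{diag}(M) = e \\ -\ell e \,\le\, M e \,\le\, \ell e}}
\ \ \max_{0 \le \alpha \le C e}\ 2\,\alpha^\top e - \alpha^\top (K \circ M)\,\alpha
```

Here K is the kernel matrix, \circ is the elementwise product, and the bound \ell enforces class balance, ruling out the trivial labeling that puts every point in one class. The inner maximum is a pointwise supremum of affine functions of M, so the outer problem is convex; a Schur complement turns it into the SDP the abstract refers to.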
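A runnable sketch of that SDP, using the cvxpy modeling package with the SCS solver; the function name mmc_sdp, the balance parameter, and the eigenvector rounding step are illustrative choices of mine, not the dissertation's procedure.

```python
# Minimal sketch of the SDP relaxation for unsupervised two-class SVM
# training, in the spirit of maximum margin clustering. All names and the
# exact constraint set are illustrative assumptions. Requires numpy, cvxpy.
import numpy as np
import cvxpy as cp

def mmc_sdp(K, C=1.0, balance=0.3):
    """Solve the SDP relaxation; K is an n x n kernel (Gram) matrix."""
    n = K.shape[0]
    e = np.ones(n)

    M = cp.Variable((n, n), symmetric=True)   # relaxation of y y^T
    mu = cp.Variable(n, nonneg=True)          # multipliers for alpha >= 0
    nu = cp.Variable(n, nonneg=True)          # multipliers for alpha <= C
    delta = cp.Variable()                     # epigraph variable (objective)

    v = e + mu - nu
    # Schur-complement encoding of
    #   delta - 2*C*sum(nu) >= v^T (K o M)^{-1} v
    S = cp.bmat([
        [cp.multiply(K, M), cp.reshape(v, (n, 1))],
        [cp.reshape(v, (1, n)),
         cp.reshape(delta - 2 * C * cp.sum(nu), (1, 1))],
    ])

    constraints = [
        M >> 0,
        cp.diag(M) == 1,
        S >> 0,
        # class-balance constraint: rules out the one-class labeling
        cp.abs(M @ e) <= balance * n,
    ]
    prob = cp.Problem(cp.Minimize(delta), constraints)
    prob.solve(solver=cp.SCS)

    # Round the relaxed M back to labels via its leading eigenvector.
    _, V = np.linalg.eigh(M.value)
    y = np.sign(V[:, -1])
    return y, prob.value
```

The SDP has on the order of n^2 variables, so a general-purpose solver only scales to modest n; polynomial-time solvability, as claimed in the abstract, is what the relaxation buys over a combinatorial search across labelings.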
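For the robustness result, one illustrative way to write the outlier-suppression idea (again a hedged sketch, not necessarily the dissertation's exact formulation): attach a weight \eta_i \in [0,1] to each training example, bound the total amount of suppression, and optimize the weights jointly with training.

```latex
% \eta_i = 1 keeps example i, \eta_i = 0 ablates it as an outlier;
% at most a fraction \rho of the data may be suppressed.
\min_{\substack{\eta \in [0,1]^n \\ e^\top \eta \,\ge\, (1-\rho)\,n}}
\ \ \max_{0 \le \alpha \le C e}\ 2\,\alpha^\top \eta - \alpha^\top \big(K \circ y y^\top\big)\,\alpha
```

The inner maximum is a pointwise supremum of functions affine in \eta, so the outer minimization is convex and the same semidefinite machinery applies; driving \eta_i toward zero removes example i's contribution to the margin reward in the dual objective.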
