Mirror Descent for Metric Learning: A Unified Approach

Published in Twenty-Third European Conference on Machine Learning (ECML'12), Bristol, United Kingdom, 2012

Recommended citation: G. Kunapuli and J. W. Shavlik. Mirror Descent for Metric Learning: A Unified Approach. Twenty-Third European Conference on Machine Learning (ECML'12), Bristol, United Kingdom, September 24-29, 2012. http://gkunapuli.github.io/files/12mdmlECML.pdf

Most metric learning methods are characterized by diverse loss functions and projection methods, which naturally raises the question: is there a wider framework that can generalize many of these methods? In addition, persistent issues are those of scalability to large data sets and the question of kernelizability. We propose a unified approach to Mahalanobis metric learning: an online regularized metric learning algorithm based on the ideas of composite objective mirror descent (COMID). The metric learning problem is formulated as a regularized positive semidefinite matrix learning problem, whose update rules can be derived using the COMID framework. This approach aims to be scalable, kernelizable, and admissible to many different types of Bregman functions and loss functions, which allows for the tailoring of several different classes of algorithms. The most novel contribution is the use of the trace norm, which yields a metric that is sparse in its eigenspectrum, thus simultaneously performing feature selection along with metric learning.
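As a rough illustration of the kind of update the abstract describes, the sketch below shows one hypothetical COMID step for a trace-norm-regularized Mahalanobis metric, under the simplifying assumption of the squared-Frobenius Bregman function (so the mirror step reduces to a plain gradient step). The function name, step size `eta`, and regularization weight `rho` are illustrative choices, not the paper's notation; the composite (trace-norm) part is handled by soft-thresholding the eigenvalues, which also enforces positive semidefiniteness.

```python
import numpy as np

def comid_trace_norm_update(M, G, eta, rho):
    """One illustrative COMID step for Mahalanobis metric learning.

    Assumes the squared-Frobenius Bregman function, so the mirror step
    is a gradient step; the trace-norm regularizer then soft-thresholds
    the eigenvalues, and clipping at zero keeps the metric PSD.

    M   : current PSD metric matrix (d x d)
    G   : (sub)gradient of the loss at M
    eta : step size
    rho : trace-norm regularization strength
    """
    Y = M - eta * G                      # gradient (mirror) step
    Y = (Y + Y.T) / 2                    # symmetrize for numerical safety
    w, V = np.linalg.eigh(Y)             # eigendecomposition
    w = np.maximum(w - eta * rho, 0.0)   # soft-threshold eigenvalues, keep PSD
    return (V * w) @ V.T                 # reassemble the metric

# Toy usage: gradient contributed by one "similar" pair (x, z),
# whose squared Mahalanobis distance (x-z)^T M (x-z) we want small.
rng = np.random.default_rng(0)
d = 5
M = np.eye(d)
x, z = rng.standard_normal(d), rng.standard_normal(d)
diff = x - z
G = np.outer(diff, diff)                 # gradient of (x-z)^T M (x-z) w.r.t. M
M_new = comid_trace_norm_update(M, G, eta=0.1, rho=0.5)
```

With a sufficiently large `rho`, the soft-thresholding step zeroes out small eigenvalues, which is the eigenspectrum sparsity (implicit feature selection) the abstract refers to.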

[BibTeX] [Code]