Learning to Combine Distances for Complex Representations

Posted in Science on September 19, 2008

The k-Nearest Neighbors algorithm can easily be adapted to classify complex objects (e.g. sets, graphs) as long as a suitable dissimilarity function is defined over the input space. Both the representation of the learning instances and the dissimilarity applied to that representation should be chosen on the basis of domain knowledge. However, even when domain knowledge is available, it can be far from obvious which complex representation to use or which dissimilarity to apply to the chosen representation. In this paper we present a framework that makes it possible to combine different complex representations of a given learning problem and/or different dissimilarities defined on those representations. We build on ideas previously developed for metric learning on vectorial data. We demonstrate the utility of our method in domains in which the learning instances are represented as sets of vectors, by learning how to combine different set distance measures.
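To make the core idea concrete, here is a minimal sketch (not the authors' actual method) of combining two common set distances, the Hausdorff distance and the average pairwise distance, as a weighted sum, and selecting the weight by leave-one-out 1-NN accuracy on a toy dataset. The distance functions, the linear combination, and the weight-selection scheme are all illustrative assumptions.

```python
import numpy as np

def hausdorff(A, B):
    # Symmetric Hausdorff distance between two sets of vectors
    # (rows of A and B are the set elements).
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def average_link(A, B):
    # Mean of all pairwise Euclidean distances between the two sets.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.mean()

def combined(A, B, w):
    # Convex combination of the two set distances; w is the
    # parameter we "learn" below.
    return w * hausdorff(A, B) + (1 - w) * average_link(A, B)

def loo_1nn_accuracy(sets, labels, w):
    # Leave-one-out 1-NN classification accuracy under the
    # combined distance with weight w.
    n = len(sets)
    correct = 0
    for i in range(n):
        dists = [combined(sets[i], sets[j], w) if j != i else np.inf
                 for j in range(n)]
        correct += labels[int(np.argmin(dists))] == labels[i]
    return correct / n

# Toy data: each instance is a set of 5 vectors in R^2;
# class 0 is centered at the origin, class 1 is shifted.
rng = np.random.default_rng(0)
sets = [rng.normal(loc=c, size=(5, 2)) for c in (0, 0, 0, 3, 3, 3)]
labels = [0, 0, 0, 1, 1, 1]

# Pick the combination weight by grid search on LOO accuracy.
best_w = max(np.linspace(0.0, 1.0, 11),
             key=lambda w: loo_1nn_accuracy(sets, labels, w))
```

In practice the paper's framework learns such combinations rather than grid-searching a single weight, and it can mix dissimilarities defined on entirely different representations; the sketch only shows why a weighted combination of set distances is a well-defined object to plug into kNN.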

Author: Adam Woznica, University of Geneva

Watch Video

Tags: Science, Lectures, Computer Science, Machine Learning, VideoLectures.Net, Structured Output, Structured data