"Beyond Classification Accuracy - Advanced Strategies for Learning Vector Quantization"
Learning Vector Quantization (LVQ), as introduced by T. Kohonen, is an intuitive prototype-based learning scheme for classification tasks. Originally motivated by the Bayesian classification decision, Sato & Yamada proposed a cost function for LVQ that approximates the classification error while allowing gradient descent learning for the prototypes (GLVQ). However, if imbalanced data classes have to be learned, the classification error does not reflect the performance of a classifier adequately. More appropriate statistical measures are derived from the confusion matrix, and these should be used for classifier optimization and evaluation. In the talk we discuss variants of LVQ that optimize such measures instead of the approximated classification error while keeping both basic properties of GLVQ: prototypes and gradient descent learning. The basic ingredient for these approaches is the border-sensitive GLVQ variant. We will explain these ideas and additionally introduce an LVQ variant optimizing the area under the ROC curve.
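To make the GLVQ idea mentioned above concrete, the sketch below shows the standard Sato & Yamada relative distance difference mu(x) = (d_plus - d_minus)/(d_plus + d_minus) and one stochastic gradient step on it, assuming squared Euclidean distances and the identity as squashing function. The function and variable names are illustrative choices, not taken from the talk.

```python
import numpy as np

def glvq_mu(x, prototypes, proto_labels, y):
    """Relative distance difference mu(x) = (d+ - d-)/(d+ + d-).

    d+ is the squared distance to the closest prototype with the correct
    label y, d- the closest with a wrong label; mu(x) < 0 means x is
    classified correctly, so minimizing mu approximates the error count.
    """
    d = np.sum((prototypes - x) ** 2, axis=1)
    same = proto_labels == y
    d_plus = d[same].min()
    d_minus = d[~same].min()
    return (d_plus - d_minus) / (d_plus + d_minus)

def glvq_step(x, y, prototypes, proto_labels, lr=0.05):
    """One stochastic gradient descent step on mu(x) for a single sample."""
    d = np.sum((prototypes - x) ** 2, axis=1)
    same = proto_labels == y
    i_plus = np.where(same)[0][d[same].argmin()]
    i_minus = np.where(~same)[0][d[~same].argmin()]
    dp, dm = d[i_plus], d[i_minus]
    denom = (dp + dm) ** 2
    # d(mu)/d(d+) = 2 d- / (d+ + d-)^2 and d(d+)/dw+ = -2(x - w+),
    # so descent attracts the correct prototype and repels the wrong one.
    prototypes[i_plus] += lr * (4 * dm / denom) * (x - prototypes[i_plus])
    prototypes[i_minus] -= lr * (4 * dp / denom) * (x - prototypes[i_minus])
    return prototypes
```

The confusion-matrix and ROC-based variants discussed in the talk replace this per-sample cost with other prototype-differentiable objectives; this sketch only fixes the shared baseline.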
Location: INB Seminarraum