
Dimensionality Reduction by Self Organizing Maps that preserve distances in the Output Space

Conference:

International Joint Conference on Neural Networks (IJCNN 2009)

Atlanta, U.S.A.

14-19 June 2009

Dimensionality reduction is a key issue in many scientific problems in which data are originally given as high-dimensional vectors that nevertheless lie on a lower-dimensional manifold. Such data can therefore be represented by a reduced number of values that parametrize their position on this non-linear manifold. Dimensionality reduction is essential not only for representing and managing data, but also for understanding them at a high level of interpretation, similar to the way the mammalian cortex operates. This paper presents an algorithm for representing data that lie on a non-linear manifold by a reduced number of coordinates along a grid, or map, of neurons extended over that manifold. The map is generated by a self-organizing learning process whose key feature is that the winning neuron is selected so as to preserve the distances between input data when they are represented by their coordinates in the output map. Unlike other methods, the proposed algorithm has several important features: the intrinsic dimensionality is obtained within the learning process itself, it does not require a long coarse-positioning phase, and it seeks to maintain the data structure from the beginning rather than leaving it as a property to be verified afterwards. The algorithm has proven to solve classical dimensionality reduction problems efficiently, and has also shown its usefulness in realistic problems such as face image classification and document indexing.
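The core idea of selecting the winning neuron so as to preserve input-space distances in the output map can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's actual algorithm: it augments a standard self-organizing map's winner selection with a penalty term (weighted by an illustrative parameter `alpha`) for distorting the distance between the current sample and the previously presented one when both are measured in map coordinates.

```python
import numpy as np

def train_distance_preserving_som(X, grid_shape=(10, 10), epochs=20,
                                  lr=0.5, sigma=2.0, alpha=0.5, seed=0):
    """Toy SOM whose winner selection blends the usual weight-space
    distance with a term penalizing distortion of pairwise input
    distances in the output (grid) coordinates. The blending weight
    `alpha` and all hyperparameters are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n_units = grid_shape[0] * grid_shape[1]
    W = rng.normal(size=(n_units, X.shape[1]))      # codebook vectors
    # integer grid coordinates of each unit in the output map
    coords = np.array([(i, j) for i in range(grid_shape[0])
                              for j in range(grid_shape[1])], dtype=float)

    prev_x, prev_win = None, None
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            # quantization error for every unit
            q = np.linalg.norm(W - x, axis=1)
            if prev_x is not None:
                # distance-preservation term: mismatch between the
                # input-space distance to the previous sample and the
                # map-space distance to that sample's winner
                d_in = np.linalg.norm(x - prev_x)
                d_out = np.linalg.norm(coords - coords[prev_win], axis=1)
                q = q + alpha * np.abs(d_out - d_in)
            win = int(np.argmin(q))
            # standard Gaussian-neighborhood update toward x
            h = np.exp(-np.linalg.norm(coords - coords[win], axis=1) ** 2
                       / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)
            prev_x, prev_win = x, win
        lr *= 0.95                      # decay learning rate
        sigma = max(0.5, sigma * 0.95)  # shrink neighborhood
    return W, coords

# Usage: map 3-D samples to 2-D grid coordinates
X = np.random.default_rng(1).normal(size=(200, 3))
W, coords = train_distance_preserving_som(X)
winners = np.argmin(np.linalg.norm(X[:, None, :] - W[None], axis=2), axis=1)
embedding = coords[winners]   # 2-D representation of each sample
```

In this sketch the map dimensionality is fixed in advance, whereas the paper's method obtains the intrinsic dimensionality during learning itself; the sketch only demonstrates how a distance-preservation criterion can enter the winner-selection step.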

campoy ijcnn09.pdf (989.15 KB)