
A Riemannian Network for SPD Matrix Learning

AAAI Conference on Artificial Intelligence (AAAI), 2017
Zhiwu Huang, Luc Van Gool
Abstract

Symmetric Positive Definite (SPD) matrix learning methods have become popular in many image and video processing tasks, thanks to their ability to learn appropriate statistical representations while respecting the Riemannian geometry of the underlying SPD manifold. In this paper we build a Riemannian network to open up a new direction of SPD matrix non-linear learning in a deep architecture. The built network generalizes the Euclidean network paradigm to non-Euclidean SPD manifolds. In particular, we devise bilinear mapping layers to transform input SPD matrices into more desirable SPD matrices, exploit eigenvalue rectification layers to introduce non-linearity by applying a non-linear function to the new SPD matrices, and design eigenvalue logarithm layers to perform Log-Euclidean Riemannian computing on the resulting SPD matrices for regular output layers. For training the deep network, we propose a Riemannian matrix backpropagation that exploits a variant of stochastic gradient descent on the Stiefel manifolds on which the network weights reside. We show through experiments that the proposed SPD network can be simply trained and outperform existing SPD matrix learning and state-of-the-art methods in three typical visual classification tasks.
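The three layer types described above reduce to simple spectral operations on SPD matrices. Below is a minimal NumPy sketch of plausible BiMap, ReEig, and LogEig forward passes plus a simplified Stiefel-manifold SGD step; the function names, the rectification threshold `eps`, the tangent-space projection, and the QR-based retraction are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of the layer types named in the abstract (BiMap, ReEig,
# LogEig) and a simplified Stiefel-manifold SGD step. All names, the
# threshold `eps`, and the QR retraction are illustrative assumptions.
import numpy as np

def bimap(X, W):
    """Bilinear mapping layer: maps an SPD matrix X to W^T X W.
    W is a column-orthonormal (Stiefel) matrix of shape (d_in, d_out),
    so the output is again SPD, of size d_out x d_out."""
    return W.T @ X @ W

def reeig(X, eps=1e-4):
    """Eigenvalue rectification layer: clamps eigenvalues below eps,
    a non-linear activation acting on the spectrum (ReLU-like)."""
    vals, vecs = np.linalg.eigh(X)
    return vecs @ np.diag(np.maximum(vals, eps)) @ vecs.T

def logeig(X):
    """Eigenvalue logarithm layer: matrix logarithm via the eigen-
    decomposition (Log-Euclidean computing), so the result can feed
    regular Euclidean output layers."""
    vals, vecs = np.linalg.eigh(X)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def stiefel_sgd_step(W, euclid_grad, lr=1e-2):
    """One SGD step that keeps W on the Stiefel manifold: project the
    Euclidean gradient onto the tangent space at W, take a step, then
    retract back onto the manifold with a QR decomposition."""
    tangent = euclid_grad - W @ (W.T @ euclid_grad)   # tangent-space projection
    Q, R = np.linalg.qr(W - lr * tangent)             # QR retraction
    return Q * np.sign(np.diag(R))                    # fix column signs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((16, 16))
    X = A @ A.T + 1e-3 * np.eye(16)                   # a random SPD input
    W = np.linalg.qr(rng.standard_normal((16, 8)))[0] # Stiefel weight
    Y = logeig(reeig(bimap(X, W)))                    # BiMap -> ReEig -> LogEig
    print(Y.shape)                                    # (8, 8)
```

Clamping the spectrum in the rectification step keeps every intermediate matrix strictly positive definite, which is what makes the final matrix logarithm well defined before the output layers.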
