Learning Sparse Neural Networks via $\ell_0$ and $T\ell_1$ by a Relaxed Variable Splitting Method with Application to Multi-scale Curve Classification

Abstract

We study sparsification of convolutional neural networks (CNN) by a relaxed variable splitting method with $\ell_0$ and transformed-$\ell_1$ ($T\ell_1$) penalties, with application to complex curves such as texts written in different fonts and words written with trembling hands simulating those of Parkinson's disease patients. The CNN contains 3 convolutional layers, each followed by a max pooling, and finally a fully connected layer, which contains the largest number of network weights. With the $\ell_0$ penalty, we achieved over 99\% test accuracy in distinguishing shaky vs. regular fonts or handwriting, with over 86\% of the weights in the fully connected layer being zero. Comparable sparsity and test accuracy are also reached with a proper choice of the $T\ell_1$ penalty.
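To make the setup concrete, the following minimal sketch (not the authors' code) builds a CNN with the structure described above: three convolutional layers, each followed by max pooling, and a single fully connected layer, together with the commonly used transformed-$\ell_1$ penalty $\rho_a(w) = (a+1)|w|/(a+|w|)$ applied to the fully connected weights. The channel counts, kernel sizes, the $64\times 64$ grayscale input, and the parameter $a$ are illustrative assumptions, not values taken from the paper.

```python
# Sketch of the abstract's architecture: 3 conv layers, each followed by max
# pooling, then one fully connected layer (which holds most of the weights).
# Hyperparameters below are hypothetical placeholders.
import torch
import torch.nn as nn


class CurveCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # conv block 1 + max pooling
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # conv block 2 + max pooling
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # conv block 3 + max pooling
        )
        # Fully connected layer: the target of the sparsity penalty.
        # 64 * 8 * 8 assumes a 64x64 input halved three times by pooling.
        self.fc = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.fc(x.flatten(1))


def transformed_l1(w: torch.Tensor, a: float = 1.0) -> torch.Tensor:
    """Transformed-ell_1 penalty (a+1)|w| / (a+|w|), summed over all entries."""
    absw = w.abs()
    return ((a + 1.0) * absw / (a + absw)).sum()


if __name__ == "__main__":
    model = CurveCNN()
    x = torch.randn(4, 1, 64, 64)              # batch of 4 hypothetical 64x64 images
    logits = model(x)
    penalty = transformed_l1(model.fc.weight)  # add lam * penalty to the task loss
    print(logits.shape, float(penalty))
```

In training, this penalty (or an $\ell_0$ count) would be weighted and added to the classification loss, and the relaxed variable splitting scheme would handle it through a separate splitting variable rather than by direct gradient descent on the penalty itself.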
