Sequential Routing Framework: Fully Capsule Network-based Speech
Recognition
Capsule networks (CapsNets) have recently attracted attention as a novel neural architecture. This paper presents the sequential routing framework, which we believe is the first method to adapt a CapsNet-only structure to sequence-to-sequence recognition. Input sequences are capsulized and then sliced by a window size. Each slice is classified into a label at the corresponding time step through iterative routing mechanisms. Afterwards, losses are computed by connectionist temporal classification (CTC). During routing, learnable weights and iteration outputs are shared across the slices. By sharing the two, the required number of parameters is controlled by the window size regardless of the sequence length. Moreover, the method minimizes decoding-speed degradation caused by the routing iterations, since it can operate in a non-iterative manner without losing accuracy. The method achieves a word error rate of 16.9% on the Wall Street Journal corpus, comparable to bidirectional long short-term memory-based CTC networks. On the TIMIT corpus, it attains a 0.7% lower phone error rate, at 17.5%, with less than half the parameters of convolutional neural network-based CTC networks (Zhang et al., 2016).
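A minimal sketch of the idea described above: windowed slices of a capsulized sequence are routed with a single shared weight tensor, so the parameter count depends only on the window size, not the sequence length. This is an illustrative NumPy reimplementation of routing-by-agreement (Sabour et al., 2017) applied per slice, not the authors' exact architecture; all function names and shapes are assumptions.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Capsule nonlinearity: shrinks each vector's norm into [0, 1)
    # while preserving its direction.
    norm2 = np.sum(s * s, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def route_slice(u, W, n_iters=3):
    # u: (n_in, d_in) input capsules of one slice.
    # W: (n_out, n_in, d_in, d_out) weights, SHARED across all slices.
    u_hat = np.einsum('id,jide->jie', u, W)      # prediction vectors
    b = np.zeros(u_hat.shape[:2])                # routing logits
    for _ in range(n_iters):                     # iterative routing
        c = np.exp(b) / np.exp(b).sum(axis=0, keepdims=True)
        s = (c[..., None] * u_hat).sum(axis=1)   # weighted vote
        v = squash(s)                            # (n_out, d_out)
        b = b + np.einsum('jie,je->ji', u_hat, v)  # agreement update
    return v

def sequential_routing(x, W, window=3, n_iters=3):
    # x: (T, d_in) capsulized input sequence. Each window of frames is
    # routed with the same W, yielding one output-capsule set per step;
    # the per-step outputs would then feed a CTC loss (not shown).
    outs = [route_slice(x[t:t + window], W, n_iters)
            for t in range(x.shape[0] - window + 1)]
    return np.stack(outs)  # (T - window + 1, n_out, d_out)
```

Because W is reused for every slice, doubling the input length adds no parameters; only enlarging the window (or the capsule dimensions) does.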