arXiv:1810.02966
Understanding Recurrent Neural Architectures by Analyzing and Synthesizing Long Distance Dependencies in Benchmark Sequential Datasets

6 October 2018
Abhijit Mahalunkar
John D. Kelleher
Abstract

In order to build efficient deep recurrent neural architectures, it is essential to analyze the complexity of long distance dependencies (LDDs) of the dataset being modeled. In this paper, we present a detailed analysis of the dependency decay curve exhibited by various datasets. Datasets sampled from a similar process (e.g. natural language, sequential MNIST, Strictly k-Piecewise languages, etc.) display variations in the properties of the dependency decay curve. Our analysis reveals the factors resulting in these variations, such as (i) the number of unique symbols in a dataset, (ii) the size of the dataset, (iii) the number of interacting symbols within a given LDD, and (iv) the distance between the interacting symbols. We test these factors by generating synthesized datasets of the Strictly k-Piecewise languages. Another advantage of these synthesized datasets is that they enable targeted testing of deep recurrent neural architectures in terms of their ability to model LDDs with different characteristics. We also demonstrate that analysing dependency decay curves can inform the selection of optimal hyper-parameters for SOTA deep recurrent neural architectures. This analysis can directly contribute to the development of more accurate and efficient sequential models.
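The two ingredients the abstract describes, measuring a dataset's dependency decay curve and synthesizing Strictly k-Piecewise (SP-k) data with controlled LDD characteristics, can be sketched as follows. This is a minimal illustration rather than the authors' code: it assumes the decay curve is estimated as the mutual information between symbols separated by increasing distances, and that SP-k strings are those avoiding a set of forbidden subsequences; all function names are hypothetical.

```python
import random
from collections import Counter
from math import log2


def mutual_information_at_distance(seq, d):
    """Estimate I(X_t; X_{t+d}) from symbol-pair counts at distance d."""
    pairs = Counter(zip(seq, seq[d:]))
    n = sum(pairs.values())
    left = Counter(a for a, _ in pairs.elements())
    right = Counter(b for _, b in pairs.elements())
    mi = 0.0
    for (a, b), c in pairs.items():
        p_ab = c / n
        mi += p_ab * log2(p_ab / ((left[a] / n) * (right[b] / n)))
    return mi


def dependency_decay_curve(seq, max_d):
    """Mutual information as a function of symbol distance d = 1..max_d."""
    return [mutual_information_at_distance(seq, d) for d in range(1, max_d + 1)]


def contains_subsequence(s, sub):
    """True if every character of `sub` appears in `s` in order."""
    it = iter(s)
    return all(ch in it for ch in sub)


def sample_spk(alphabet, forbidden, length, n, seed=0):
    """Rejection-sample strings avoiding every forbidden subsequence (SP-k)."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        s = "".join(rng.choice(alphabet) for _ in range(length))
        if not any(contains_subsequence(s, f) for f in forbidden):
            out.append(s)
    return out
```

Varying the alphabet size, dataset size, forbidden-subsequence length k, and string length in `sample_spk` mirrors the four factors listed above, so the resulting corpora exercise recurrent architectures on LDDs of known difficulty.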
