

DeepSITH: Efficient Learning via Decomposition of What and When Across Time Scales

Neural Information Processing Systems (NeurIPS), 2021
9 April 2021
Brandon G. Jacques, Zoran Tiganj, Marc W Howard, P. Sederberg
arXiv:2104.04646 (abs) · PDF · HTML · GitHub

Papers citing "DeepSITH: Efficient Learning via Decomposition of What and When Across Time Scales"

3 papers shown
Gradual Forgetting: Logarithmic Compression for Extending Transformer Context Windows
Billy Dickson, Zoran Tiganj
25 Oct 2025
Traveling Waves Encode the Recent Past and Enhance Sequence Learning
International Conference on Learning Representations (ICLR), 2023
Thomas Anderson Keller, L. Muller, T. Sejnowski, Max Welling
03 Sep 2023
A deep convolutional neural network that is invariant to time rescaling
International Conference on Machine Learning (ICML), 2021
Brandon G. Jacques, Zoran Tiganj, Aakash Sarkar, Marc W Howard, P. Sederberg
09 Jul 2021