
Pointer: Linear-Complexity Long-Range Modeling without Pre-training

Main: 7 pages, 4 figures, 2 tables; bibliography: 1 page
Abstract

We introduce Pointer, a novel architecture that achieves linear O(NK) complexity for long-range sequence modeling while maintaining superior performance without requiring pre-training. Unlike standard attention mechanisms, which compute O(N^2) pairwise interactions, our approach uses layer-wise pointer chaining, where each layer's pointer selection depends on the previous layer's pointer positions, creating explicit long-distance connections through pointer chains. We demonstrate that this architecture achieves a 2--10× speedup on long sequences compared to standard transformers, maintains >95% accuracy on copy tasks at distances of up to 2048 tokens, and learns interpretable pointer patterns that reveal structured dependency modeling. Our experiments on efficiency benchmarks, long-range dependency tasks, and interpretability analysis show that Pointer offers a compelling alternative to attention mechanisms in scenarios that require efficient long-range modeling without pre-training dependencies.
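The abstract does not spell out the pointer parameterization, but a minimal sketch helps make the O(NK) claim concrete. The PyTorch layer below is a hypothetical illustration, assuming each token scores only K candidate positions anchored at the position its pointer reached in the previous layer, mixes those candidates softly, and passes its best-scoring position forward as the next layer's anchor; the class name PointerLayer, the windowed candidate set, and the hard argmax chaining are illustrative assumptions, not the paper's actual design.

```python
import math
import torch
import torch.nn as nn


class PointerLayer(nn.Module):
    """Speculative sketch of one pointer-chaining layer.

    Each token scores only K candidate positions (a window behind the
    anchor its pointer reached in the previous layer), so per-layer cost
    is O(N*K) rather than O(N^2). The paper's actual design may differ.
    """

    def __init__(self, d_model: int, k: int):
        super().__init__()
        self.k = k
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor, anchors: torch.Tensor):
        # x:       (B, N, D) token representations
        # anchors: (B, N)    position each token's pointer reached last layer
        B, N, D = x.shape
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)

        # Candidate positions: a causal window of K offsets behind each
        # anchor, clamped to the valid range.  Shape (B, N, K).
        offsets = torch.arange(self.k, device=x.device)
        cand = (anchors.unsqueeze(-1) - offsets).clamp(min=0)

        # Gather the K candidate keys/values per token: O(N*K) memory.
        flat = cand.reshape(B, N * self.k)

        def gather(t: torch.Tensor) -> torch.Tensor:
            idx = flat.unsqueeze(-1).expand(-1, -1, D)
            return torch.gather(t, 1, idx).reshape(B, N, self.k, D)

        cand_k, cand_v = gather(k), gather(v)

        # Score only the K candidates, not all N positions.
        scores = torch.einsum("bnd,bnkd->bnk", q, cand_k) / math.sqrt(D)
        weights = scores.softmax(dim=-1)                     # (B, N, K)
        out = torch.einsum("bnk,bnkd->bnd", weights, cand_v)

        # Hard pointer: the best-scoring candidate becomes this token's
        # anchor for the next layer, so pointers chain across layers.
        next_anchors = torch.gather(
            cand, 2, scores.argmax(-1, keepdim=True)
        ).squeeze(-1)
        return x + out, next_anchors


# Toy usage: stack a few pointer layers; layer 0 anchors each token at itself.
layers = nn.ModuleList(PointerLayer(d_model=64, k=8) for _ in range(4))
x = torch.randn(2, 1024, 64)
anchors = torch.arange(1024).expand(2, -1)
for layer in layers:
    x, anchors = layer(x, anchors)
```

Under these assumptions, each token touches only K positions per layer, giving O(NK) work per layer, and stacking L layers lets the chained anchors reach positions roughly L·K tokens away, which is the source of the explicit long-distance connections described in the abstract.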
