Towards Understanding the Word Sensitivity of Attention Layers: A Study via Random Features
Simone Bombari, Marco Mondelli
arXiv:2402.02969, 5 February 2024
Papers citing "Towards Understanding the Word Sensitivity of Attention Layers: A Study via Random Features" (4 of 4 shown)
| Title | Authors | Tags | Metrics | Date |
|---|---|---|---|---|
| On the Convergence of Encoder-only Shallow Transformers | Yongtao Wu, Fanghui Liu, Grigorios G. Chrysos, V. Cevher | — | 42 · 5 · 0 | 02 Nov 2023 |
| Unlimiformer: Long-Range Transformers with Unlimited Length Input | Amanda Bertsch, Uri Alon, Graham Neubig, Matthew R. Gormley | RALM | 96 · 122 · 0 | 02 May 2023 |
| Sparks of Artificial General Intelligence: Early experiments with GPT-4 | Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, J. Gehrke, Eric Horvitz, ..., Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang | ELM, AI4MH, AI4CE, ALM | 298 · 3,007 · 0 | 22 Mar 2023 |
| Gradient-based Adversarial Attacks against Text Transformers | Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, Douwe Kiela | SILM | 100 · 227 · 0 | 15 Apr 2021 |