ResearchTrend.AI
How Smooth Is Attention?

Valérie Castin, Pierre Ablin, Gabriel Peyré
22 December 2023 · arXiv:2312.14820
AAML

Papers citing "How Smooth Is Attention?"

7 / 7 papers shown
  • DDEQs: Distributional Deep Equilibrium Models through Wasserstein Gradient Flows
    Jonathan Geuter, Clément Bonet, Anna Korba, David Alvarez-Melis
    03 Mar 2025
  • Clustering in Causal Attention Masking
    Nikita Karagodin, Yury Polyanskiy, Philippe Rigollet
    07 Nov 2024
  • Is Smoothness the Key to Robustness? A Comparison of Attention and Convolution Models Using a Novel Metric
    Baiyuan Chen
    MLT
    23 Oct 2024
  • Transformers are Universal In-context Learners
    Takashi Furuya, Maarten V. de Hoop, Gabriel Peyré
    02 Aug 2024
  • The Impact of LoRA on the Emergence of Clusters in Transformers
    Hugo Koubbi, Matthieu Boussard, Louis Hernandez
    23 Feb 2024
  • Globally-Robust Neural Networks
    Klas Leino, Zifan Wang, Matt Fredrikson
    AAML, OOD
    16 Feb 2021
  • Adversarial Machine Learning at Scale
    Alexey Kurakin, Ian Goodfellow, Samy Bengio
    AAML
    04 Nov 2016