Lipschitz Normalization for Self-Attention Layers with Application to Graph Neural Networks

Published: 8 March 2021
Authors: George Dasoulas, Kevin Scaman, Aladin Virmaux
Topics: GNN
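For context on the paper's subject, the sketch below shows one way a Lipschitz-motivated normalization of dot-product attention can be written in PyTorch: the attention logits are divided by a global bound on the query and key norms, which by Cauchy-Schwarz keeps every logit in [-1, 1] regardless of input scale. This is an illustrative sketch only, not the normalization proposed in the paper; the function name lipschitz_normalized_attention, the global max-norm scaling, and the eps parameter are all assumptions introduced here.

```python
import torch
import torch.nn.functional as F

def lipschitz_normalized_attention(q, k, v, eps=1e-6):
    """Dot-product attention with a Lipschitz-motivated rescaling of the logits.

    Illustrative sketch only (not the paper's construction): the logits are
    divided by the largest query norm times the largest key norm, so by
    Cauchy-Schwarz every logit lies in [-1, 1] whatever the input scale.
    q, k, v: tensors of shape (batch, seq, dim).
    """
    q_norm = q.norm(dim=-1, keepdim=True).clamp_min(eps)  # (B, S, 1) per-token query norms
    k_norm = k.norm(dim=-1, keepdim=True).clamp_min(eps)  # (B, S, 1) per-token key norms
    scale = q_norm.max() * k_norm.max()                   # scalar bound on |<q_i, k_j>|
    scores = torch.einsum("bqd,bkd->bqk", q, k) / scale   # bounded attention logits
    attn = F.softmax(scores, dim=-1)                      # row-stochastic attention weights
    return attn @ v                                       # (B, S, dim) output

# Quick shape check with hypothetical toy sizes:
# out = lipschitz_normalized_attention(torch.randn(2, 5, 16),
#                                      torch.randn(2, 5, 16),
#                                      torch.randn(2, 5, 16))  # -> (2, 5, 16)
```

Bounding the logits this way is one route to making the attention map Lipschitz in its inputs, which is the kind of property the paper's title refers to; the paper's own normalization and constants may differ.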

Papers citing "Lipschitz Normalization for Self-Attention Layers with Application to Graph Neural Networks"

All 10 citing papers are shown. Each entry lists the title, topic tags (in parentheses, where the listing gives them), authors, publication date, and the three unlabeled counters from the original listing.

1. Effects of Random Edge-Dropping on Over-Squashing in Graph Neural Networks
   Jasraj Singh, Keyue Jiang, Brooks Paige, Laura Toni
   11 Feb 2025 · counters: 70 / 1 / 0

2. How Smooth Is Attention? (AAML)
   Valérie Castin, Pierre Ablin, Gabriel Peyré
   22 Dec 2023 · counters: 40 / 9 / 0

3. Graph Convolutions Enrich the Self-Attention in Transformers!
   Jeongwhan Choi, Hyowon Wi, Jayoung Kim, Yehjin Shin, Kookjin Lee, Nathaniel Trask, Noseong Park
   07 Dec 2023 · counters: 25 / 4 / 0

4. CLiSA: A Hierarchical Hybrid Transformer Model using Orthogonal Cross Attention for Satellite Image Cloud Segmentation
   Subhajit Paul, Ashutosh Gupta
   29 Nov 2023 · counters: 19 / 2 / 0

5. SwinGar: Spectrum-Inspired Neural Dynamic Deformation for Free-Swinging Garments
   Tianxing Li, Rui Shi, Qing Zhu, T. Kanai
   05 Aug 2023 · counters: 29 / 1 / 0

6. Supervised Attention Using Homophily in Graph Neural Networks (GNN)
   Michail Chatzianastasis, Giannis Nikolentzos, Michalis Vazirgiannis
   11 Jul 2023 · counters: 16 / 0 / 0

7. Centered Self-Attention Layers
   Ameen Ali, Tomer Galanti, Lior Wolf
   02 Jun 2023 · counters: 28 / 6 / 0

8. Adaptive Depth Graph Attention Networks (GNN)
   Jingbo Zhou, Yixuan Du, Ruqiong Zhang, Rui Zhang
   16 Jan 2023 · counters: 36 / 1 / 0

9. VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization (GNN, MQ)
   Mucong Ding, Kezhi Kong, Jingling Li, Chen Zhu, John P. Dickerson, Furong Huang, Tom Goldstein
   27 Oct 2021 · counters: 27 / 47 / 0

10. Effective Approaches to Attention-based Neural Machine Translation
    Thang Luong, Hieu H. Pham, Christopher D. Manning
    17 Aug 2015 · counters: 218 / 7,923 / 0