Cited By
Attention Mechanism in Neural Networks: Where it Comes and Where it Goes (arXiv:2204.13154)
Derya Soydaner. 27 April 2022.
Papers citing "Attention Mechanism in Neural Networks: Where it Comes and Where it Goes" (25 papers shown):
QLLM: Do We Really Need a Mixing Network for Credit Assignment in Multi-Agent Reinforcement Learning? Zhouyang Jiang, Bin Zhang, Airong Wei, Zhiwei Xu (17 Apr 2025) [OffRL]
RF-DETR Object Detection vs YOLOv12: A Study of Transformer-based and CNN-based Architectures for Single-Class and Multi-Class Greenfruit Detection in Complex Orchard Environments Under Label Ambiguity. Ranjan Sapkota, Rahul Harsha Cheppally, Ajay Sharda, Manoj Karkee (17 Apr 2025)
Quattro: Transformer-Accelerated Iterative Linear Quadratic Regulator Framework for Fast Trajectory Optimization. Yue Wang, Hoayu Wang, Zhaoxing Li (02 Apr 2025)
BrainNet-MoE: Brain-Inspired Mixture-of-Experts Learning for Neurological Disease Identification. Jing Zhang, Xiaowei Yu, Tong Chen, Chao-Yang Cao, Mingheng Chen, ..., Yanjun Lyu, Lu Zhang, Li Su, Tianming Liu, D. Zhu (05 Mar 2025)
A Survey of Link Prediction in Temporal Networks. Jiafeng Xiong, Ahmad Zareie, Rizos Sakellariou (28 Feb 2025) [AI4TS, AI4CE]
Integrating Biological and Machine Intelligence: Attention Mechanisms in Brain-Computer Interfaces. J. Wang, Weishan Ye, Jialin He, Li Zhang, G. Huang, Zhuliang Yu, Zhen Liang (26 Feb 2025)
SegINR: Segment-wise Implicit Neural Representation for Sequence Alignment in Neural Text-to-Speech. Minchan Kim, Myeonghun Jeong, Joun Yeop Lee, Nam Soo Kim (07 Oct 2024)
A dynamic vision sensor object recognition model based on trainable event-driven convolution and spiking attention mechanism. Peng Zheng, Qian Zhou (19 Sep 2024) [BDL]
Deep Analysis of Time Series Data for Smart Grid Startup Strategies: A Transformer-LSTM-PSO Model Approach. Zecheng Zhang (22 Aug 2024)
Graph Transformers: A Survey. Ahsan Shehzad, Feng Xia, Shagufta Abid, Ciyuan Peng, Shuo Yu, Dongyu Zhang, Karin Verspoor (13 Jul 2024) [AI4CE]
A Review of Graph Neural Networks in Epidemic Modeling. Zewen Liu, Guancheng Wan, B. A. Prakash, Max S. Y. Lau, Wei-dong Jin (28 Mar 2024) [AI4CE]
Utilizing Neural Transducers for Two-Stage Text-to-Speech via Semantic Token Prediction. Minchan Kim, Myeonghun Jeong, Byoung Jin Choi, Semin Kim, Joun Yeop Lee, Nam Soo Kim (03 Jan 2024) [AI4TS]
Universal Deoxidation of Semiconductor Substrates Assisted by Machine-Learning and Real-Time-Feedback-Control. Chaorong Shen, Wenkang Zhan, Jian Tang, Zhaofeng Wu, Bop Xu, Chao Zhao, Zhanguo Wang (04 Dec 2023)
The Potential of Wearable Sensors for Assessing Patient Acuity in Intensive Care Unit (ICU). Jessica Sena, MD Tahsin Mostafiz, Jiaqing Zhang, Andrea Davidson, S. Bandyopadhyay, ..., B. Shickel, Tyler J. Loftus, William Robson Schwartz, A. Bihorac, Parisa Rashidi (03 Nov 2023)
A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks. Saidul Islam, Hanae Elmekki, Ahmed Elsebai, Jamal Bentahar, Najat Drawel, Gaith Rjoub, Witold Pedrycz (11 Jun 2023) [ViT, MedIm]
Cross-Domain Car Detection Model with Integrated Convolutional Block Attention Mechanism. Haoxuan Xu, Songning Lai, Xianyang Li, Y. Yang (31 May 2023) [ViT]
From paintbrush to pixel: A review of deep neural networks in AI-generated art. Anne-Sofie Maerten, Derya Soydaner (14 Feb 2023)
LARF: Two-level Attention-based Random Forests with a Mixture of Contamination Models. A. Konstantinov, Lev V. Utkin (11 Oct 2022)
Attention and Self-Attention in Random Forests. Lev V. Utkin, A. Konstantinov (09 Jul 2022)
OptGAN: Optimizing and Interpreting the Latent Space of the Conditional Text-to-Image GANs. Zhenxing Zhang, Lambert Schomaker (25 Feb 2022)
Bayesian Attention Modules. Xinjie Fan, Shujian Zhang, Bo Chen, Mingyuan Zhou (20 Oct 2020)
Big Bird: Transformers for Longer Sequences. Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed (28 Jul 2020) [VLM]
Efficient Content-Based Sparse Attention with Routing Transformers. Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier (12 Mar 2020) [MoE]
Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT. Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Z. Yao, A. Gholami, Michael W. Mahoney, Kurt Keutzer (12 Sep 2019) [MQ]
Effective Approaches to Attention-based Neural Machine Translation. Thang Luong, Hieu H. Pham, Christopher D. Manning (17 Aug 2015)