Transformers in Protein: A Survey (arXiv 2505.20098)
Xiaowen Ling, Zhiqiang Li, Yanbin Wang, Zhuhong You
Tags: ViT, MedIm, AI4CE · 26 May 2025
Papers citing "Transformers in Protein: A Survey" (19 papers)

Ethereum Fraud Detection via Joint Transaction Language Model and Graph Representation Learning
Yifan Jia, Yanbin Wang, Jianguo Sun, Yiwei Liu, Zhang Sheng, Ye Tian
Citations: 4 · 09 Sep 2024

A Protein Structure Prediction Approach Leveraging Transformer and CNN Integration
Yanlin Zhou, Kai Tan, Xinyu Shen, Zheng He, Haotian Zheng
Tags: 3DV, ViT · Citations: 12 · 29 Feb 2024

Carbon Emissions and Large Neural Network Training
David A. Patterson, Joseph E. Gonzalez, Quoc V. Le, Chen Liang, Lluís-Miquel Munguía, D. Rothchild, David R. So, Maud Texier, J. Dean
Tags: AI4CE · Citations: 658 · 21 Apr 2021

Molecular graph generation with Graph Neural Networks
P. Bongini, Monica Bianchini, F. Scarselli
Tags: GNN · Citations: 140 · 14 Dec 2020

Profile Prediction: An Alignment-Based Pre-Training Task for Protein Sequence Models
Pascal Sturmfels, Jesse Vig, Ali Madani, Nazneen Rajani
Citations: 25 · 01 Dec 2020

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby
Tags: ViT · Citations: 40,217 · 22 Oct 2020

ChemBERTa: Large-Scale Self-Supervised Pretraining for Molecular Property Prediction
Seyone Chithrananda, Gabriel Grand, Bharath Ramsundar
Tags: AI4CE · Citations: 395 · 19 Oct 2020

ProtTrans: Towards Cracking the Language of Life's Code Through Self-Supervised Deep Learning and High Performance Computing
Ahmed Elnaggar, M. Heinzinger, Christian Dallago, Ghalia Rehawi, Yu Wang, ..., Tamas B. Fehér, Christoph Angerer, Martin Steinegger, D. Bhowmik, B. Rost
Tags: DRL · Citations: 932 · 13 Jul 2020

Linformer: Self-Attention with Linear Complexity
Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma
Citations: 1,678 · 08 Jun 2020

Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
Tags: BDL · Citations: 41,106 · 28 May 2020

Longformer: The Long-Document Transformer
Iz Beltagy, Matthew E. Peters, Arman Cohan
Tags: RALM, VLM · Citations: 3,996 · 10 Apr 2020

On the Relationship between Self-Attention and Convolutional Layers
Jean-Baptiste Cordonnier, Andreas Loukas, Martin Jaggi
Citations: 530 · 08 Nov 2019

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov
Tags: AIMat · Citations: 24,160 · 26 Jul 2019

Evaluating Protein Transfer Learning with TAPE
Roshan Rao, Nicholas Bhattacharya, Neil Thomas, Yan Duan, Xi Chen, John F. Canny, Pieter Abbeel, Yun S. Song
Tags: SSL · Citations: 786 · 19 Jun 2019

Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
Zihang Dai, Zhilin Yang, Yiming Yang, J. Carbonell, Quoc V. Le, Ruslan Salakhutdinov
Tags: VLM · Citations: 3,714 · 09 Jan 2019

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
Tags: VLM, SSL, SSeg · Citations: 93,936 · 11 Oct 2018

Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models
Wojciech Samek, Thomas Wiegand, K. Müller
Tags: XAI, VLM · Citations: 1,186 · 28 Aug 2017

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
Tags: 3DV · Citations: 129,831 · 12 Jun 2017

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
Tags: FAtt · Citations: 21,613 · 22 May 2017