Transformers in Protein: A Survey
Xiaowen Ling, Zhiqiang Li, Yanbin Wang, Zhuhong You
26 May 2025
Tags: ViT, MedIm, AI4CE
Papers citing "Transformers in Protein: A Survey" (15 papers)
Ethereum Fraud Detection via Joint Transaction Language Model and Graph Representation Learning
Yifan Jia, Yanbin Wang, Jianguo Sun, Yiwei Liu, Zhang Sheng, Ye Tian
4 citations, 09 Sep 2024
A Protein Structure Prediction Approach Leveraging Transformer and CNN Integration
Yanlin Zhou, Kai Tan, Xinyu Shen, Zheng He, Haotian Zheng
Tags: 3DV, ViT
12 citations, 29 Feb 2024
Molecular graph generation with Graph Neural Networks
P. Bongini, Monica Bianchini, F. Scarselli
Tags: GNN
140 citations, 14 Dec 2020
Profile Prediction: An Alignment-Based Pre-Training Task for Protein Sequence Models
Pascal Sturmfels, Jesse Vig, Ali Madani, Nazneen Rajani
25 citations, 01 Dec 2020
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby
Tags: ViT
40,217 citations, 22 Oct 2020
ChemBERTa: Large-Scale Self-Supervised Pretraining for Molecular Property Prediction
Seyone Chithrananda, Gabriel Grand, Bharath Ramsundar
Tags: AI4CE
395 citations, 19 Oct 2020
Linformer: Self-Attention with Linear Complexity
Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma
1,678 citations, 08 Jun 2020
Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
Tags: BDL
41,106 citations, 28 May 2020
Longformer: The Long-Document Transformer
Iz Beltagy, Matthew E. Peters, Arman Cohan
Tags: RALM, VLM
3,996 citations, 10 Apr 2020
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov
Tags: AIMat
24,160 citations, 26 Jul 2019
Evaluating Protein Transfer Learning with TAPE
Roshan Rao, Nicholas Bhattacharya, Neil Thomas, Yan Duan, Xi Chen, John F. Canny, Pieter Abbeel, Yun S. Song
Tags: SSL
786 citations, 19 Jun 2019
Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
Zihang Dai, Zhilin Yang, Yiming Yang, J. Carbonell, Quoc V. Le, Ruslan Salakhutdinov
Tags: VLM
3,707 citations, 09 Jan 2019
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
Tags: VLM, SSL, SSeg
93,936 citations, 11 Oct 2018
Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models
Wojciech Samek, Thomas Wiegand, K. Müller
Tags: XAI, VLM
1,186 citations, 28 Aug 2017
Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
Tags: 3DV
129,831 citations, 12 Jun 2017