ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Pre-Trained Image Processing Transformer (arXiv:2012.00364)

1 December 2020
Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao
VLM, ViT

Papers citing "Pre-Trained Image Processing Transformer"

50 of 220 citing papers shown
EmbryosFormer: Deformable Transformer and Collaborative Encoding-Decoding for Embryos Stage Development Classification
Tien-Phat Nguyen, Trong-Thang Pham, Tri Minh Nguyen, H. Le, Dung Nguyen, Hau Lam, Phong H. Nguyen, Jennifer Fowler, Minh-Triet Tran, Ngan Le
07 Oct 2022 | ViT

Rethinking Blur Synthesis for Deep Real-World Image Deblurring
Hao Wei, Chenyang Ge, Xin Qiao, Pengchao Deng
28 Sep 2022

Modular Degradation Simulation and Restoration for Under-Display Camera
Yang Zhou, Yuda Song, Xin Du
23 Sep 2022

Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration
Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte
22 Sep 2022 | ViT

DMTNet: Dynamic Multi-scale Network for Dual-pixel Images Defocus Deblurring with Transformer
Dafeng Zhang, Xiaobing Wang
13 Sep 2022 | ViT

PSAQ-ViT V2: Towards Accurate and General Data-Free Quantization for Vision Transformers
Zhikai Li, Mengjuan Chen, Junrui Xiao, Qingyi Gu
13 Sep 2022 | ViT, MQ

Generative Adversarial Super-Resolution at the Edge with Knowledge Distillation
Simone Angarano, Francesco Salvetti, Mauro Martini, Marcello Chiaberge
07 Sep 2022 | GAN

Transformer-CNN Cohort: Semi-supervised Semantic Segmentation by the Best of Both Students
Xueye Zheng, Yuan Luo, Hao Wang, Chong Fu, Lin Wang
06 Sep 2022 | ViT

CNSNet: A Cleanness-Navigated-Shadow Network for Shadow Removal
Qianhao Yu, Naishan Zheng, Jie Huang, Fengmei Zhao
06 Sep 2022

SwinFIR: Revisiting the SwinIR with Fast Fourier Convolution and Improved Training for Image Super-Resolution
Dafeng Zhang, Feiyu Huang, Shizhuo Liu, Xiaobing Wang, Zhezhu Jin
24 Aug 2022

HST: Hierarchical Swin Transformer for Compressed Image Super-resolution
B. Li, Xin Li, Yiting Lu, Sen Liu, Ruoyu Feng, Zhibo Chen
21 Aug 2022

Improved Image Classification with Token Fusion
Keong-Hun Choi, Jin-Woo Kim, Yaolong Wang, J. Ha
19 Aug 2022 | ViT

Rain Removal from Light Field Images with 4D Convolution and Multi-scale Gaussian Process
Zhiqiang Yuan, Jianhua Zhang, Yilin Ji, G. Pedersen, W. Fan
16 Aug 2022

MVSFormer: Multi-View Stereo by Learning Robust Image Features and Temperature-based Depth
Chenjie Cao, Xinlin Ren, Yanwei Fu
04 Aug 2022

DnSwin: Toward Real-World Denoising via Continuous Wavelet Sliding-Transformer
Hao Li, Zhijing Yang, Xiaobin Hong, Ziying Zhao, Junyang Chen, Yukai Shi, Jin-shan Pan
28 Jul 2022 | DiffM, ViT

Is Attention All That NeRF Needs?
T. MukundVarma, Peihao Wang, Xuxi Chen, Tianlong Chen, Subhashini Venugopalan, Zhangyang Wang
27 Jul 2022 | ViT

Jigsaw-ViT: Learning Jigsaw Puzzles in Vision Transformer
Yingyi Chen, Xiaoke Shen, Yahui Liu, Qinghua Tao, Johan A. K. Suykens
25 Jul 2022 | AAML, ViT

High-Resolution Swin Transformer for Automatic Medical Image Segmentation
Chen Wei, Shenghan Ren, Kaitai Guo, Haihong Hu, Jimin Liang
23 Jul 2022 | ViT, OOD, MedIm

Global-Local Stepwise Generative Network for Ultra High-Resolution Image Restoration
Xin Feng, Haobo Ji, Wenjie Pei, Fanglin Chen, Guangming Lu
16 Jul 2022

Heuristic-free Optimization of Force-Controlled Robot Search Strategies in Stochastic Environments
Bastian Alt, Darko Katic, Rainer Jäkel, Michael Beetz
15 Jul 2022

Learning Parallax Transformer Network for Stereo Image JPEG Artifacts Removal
Xuhao Jiang, Weimin Tan, Ri Cheng, Shili Zhou, Bo Yan
15 Jul 2022 | ViT

I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference
Zhikai Li, Qingyi Gu
04 Jul 2022 | MQ

Faster Diffusion Cardiac MRI with Deep Learning-based breath hold reduction
Michael Tanzer, Pedro F. Ferreira, Andrew D. Scott, Z. Khalique, Maria Dwornik, D. Pennell, Guang Yang, Daniel Rueckert, S. Nielles-Vallespin
21 Jun 2022 | MedIm

EATFormer: Improving Vision Transformer Inspired by Evolutionary Algorithm
Jiangning Zhang, Xiangtai Li, Yabiao Wang, Chengjie Wang, Yibo Yang, Yong Liu, Dacheng Tao
19 Jun 2022 | ViT

Multimodal Learning with Transformers: A Survey
P. Xu, Xiatian Zhu, David A. Clifton
13 Jun 2022 | ViT

Toward Real-world Single Image Deraining: A New Benchmark and Beyond
Wei Li, Qiming Zhang, Jing Zhang, Zhen Huang, Xinmei Tian, Dacheng Tao
11 Jun 2022

Degradation-Aware Unfolding Half-Shuffle Transformer for Spectral Compressive Imaging
Yuanhao Cai, Jing Lin, Haoqian Wang, Xin Yuan, Henghui Ding, Yulun Zhang, Radu Timofte, Luc Van Gool
20 May 2022

MSTRIQ: No Reference Image Quality Assessment Based on Swin Transformer with Multi-Stage Fusion
Jing Wang, Haotian Fa, X. Hou, Yitian Xu, Tao Li, X. Lu, Lean Fu
20 May 2022

Dense residual Transformer for image denoising
Chao Yao, Shuo Jin, Meiqin Liu, Xiaojuan Ban
14 May 2022 | ViT

Activating More Pixels in Image Super-Resolution Transformer
Xiangyu Chen, Xintao Wang, Jiantao Zhou, Yu Qiao, Chao Dong
09 May 2022 | ViT

Coarse-to-Fine Video Denoising with Dual-Stage Spatial-Channel Transformer
Wu Yun, Mengshi Qi, Chuanming Wang, Huiyuan Fu, Huadong Ma
30 Apr 2022 | ViT

One Model to Synthesize Them All: Multi-contrast Multi-scale Transformer for Missing Data Imputation
Jiang Liu, Srivathsa Pasumarthi, B. Duffy, Enhao Gong, Keshav Datta, Greg Zaharchuk
28 Apr 2022 | ViT, MedIm

Lightweight Bimodal Network for Single-Image Super-Resolution via Symmetric CNN and Recursive Transformer
Guangwei Gao, Z. Wang, Juncheng Li, Wenjie Li, Yi Yu, T. Zeng
28 Apr 2022 | SupR

A Multi-Head Convolutional Neural Network With Multi-path Attention improves Image Denoising
Jiahong Zhang, Meijun Qu, Ye Wang, Lihong Cao
27 Apr 2022

Neural Maximum A Posteriori Estimation on Unpaired Data for Motion Deblurring
Youjian Zhang, Chaoyue Wang, Dacheng Tao
26 Apr 2022

MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction
Yuanhao Cai, Jing Lin, Zudi Lin, Haoqian Wang, Yulun Zhang, Hanspeter Pfister, Radu Timofte, Luc Van Gool
17 Apr 2022

Simple Baselines for Image Restoration
Liangyu Chen, Xiaojie Chu, X. Zhang, Jian-jun Sun
10 Apr 2022

Multi-Task Distributed Learning using Vision Transformer with Random Patch Permutation
Sangjoon Park, Jong Chul Ye
07 Apr 2022 | FedML, MedIm

Improving Vision Transformers by Revisiting High-frequency Components
Jiawang Bai, Liuliang Yuan, Shutao Xia, Shuicheng Yan, Zhifeng Li, W. Liu
03 Apr 2022 | ViT

Rethinking Portrait Matting with Privacy Preserving
Sihan Ma, Jizhizi Li, Jing Zhang, He-jun Zhang, Dacheng Tao
31 Mar 2022

Fine-tuning Image Transformers using Learnable Memory
Mark Sandler, A. Zhmoginov, Max Vladymyrov, Andrew Jackson
29 Mar 2022 | ViT

RSTT: Real-time Spatial Temporal Transformer for Space-Time Video Super-Resolution
Z. Geng, Luming Liang, Tianyu Ding, Ilya Zharkov
27 Mar 2022

Give Me Your Attention: Dot-Product Attention Considered Harmful for Adversarial Patch Robustness
Giulio Lovisotto, Nicole Finnie, Mauricio Muñoz, Chaithanya Kumar Mummadi, J. H. Metzen
25 Mar 2022 | AAML, ViT

Meta-attention for ViT-backed Continual Learning
Mengqi Xue, Haofei Zhang, Jie Song, Mingli Song
22 Mar 2022 | CLL

HIPA: Hierarchical Patch Transformer for Single Image Super Resolution
Qing Cai, Yiming Qian, Jinxing Li, Junjie Lv, Yee-Hong Yang, Feng Wu, Dafan Zhang
19 Mar 2022

WegFormer: Transformers for Weakly Supervised Semantic Segmentation
Chunmeng Liu, Enze Xie, Wenjia Wang, Wenhai Wang, Guangya Li, Ping Luo
16 Mar 2022 | ViT

HUMUS-Net: Hybrid unrolled multi-scale network architecture for accelerated MRI reconstruction
Zalan Fabian, Berk Tinaz, Mahdi Soltanolkotabi
15 Mar 2022

Deep Transformers Thirst for Comprehensive-Frequency Data
R. Xia, Chao Xue, Boyu Deng, Fang Wang, Jingchao Wang
14 Mar 2022 | ViT

Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs
Xiaohan Ding, X. Zhang, Yi Zhou, Jungong Han, Guiguang Ding, Jian-jun Sun
13 Mar 2022 | VLM

The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy
Tianlong Chen, Zhenyu (Allen) Zhang, Yu Cheng, Ahmed Hassan Awadallah, Zhangyang Wang
12 Mar 2022 | ViT