Are Convolutional Neural Networks or Transformers more like human vision?

  Shikhar Tuli, Ishita Dasgupta, Erin Grant, Thomas L. Griffiths · ViT, FaML · 15 May 2021 · arXiv:2105.07197

Papers citing "Are Convolutional Neural Networks or Transformers more like human vision?"

25 / 25 papers shown
Do computer vision foundation models learn the low-level characteristics of the human visual system?
  Yancheng Cai, Fei Yin, Dounia Hammou, Rafal Mantiuk · VLM · 13 Mar 2025
  Presented at ResearchTrend Connect | VLM on 14 Mar 2025
Accuracy Improvement of Cell Image Segmentation Using Feedback Former
  Hinako Mitsuoka, Kazuhiro Hotta · ViT, MedIm · 23 Aug 2024
Trapped in texture bias? A large scale comparison of deep instance segmentation
  J. Theodoridis, Jessica Hofmann, J. Maucher, A. Schilling · SSeg · 17 Jan 2024
PlaNet-S: Automatic Semantic Segmentation of Placenta
  Shinnosuke Yamamoto, Isso Saito, Eichi Takaya, Ayaka Harigai, Tomomi Sato, Tomoya Kobayashi, Kei Takase, Takuya Ueda · 18 Dec 2023
Automated Sperm Assessment Framework and Neural Network Specialized for Sperm Video Recognition
  T. Fujii, Hayato Nakagawa, T. Takeshima, Y. Yumura, T. Hamagami · 10 Nov 2023
Progressive Attention Guidance for Whole Slide Vulvovaginal Candidiasis Screening
  Jiangdong Cai, Honglin Xiong, Mao-Hong Cao, Luyan Liu, Lichi Zhang, Qian Wang · 06 Sep 2023
Large-kernel Attention for Efficient and Robust Brain Lesion Segmentation
  Liam Chalcroft, Ruben Lourenço Pereira, Mikael Brudfors, Andrew S. Kayser, M. D’Esposito, Cathy J. Price, Ioannis Pappas, John Ashburner · ViT, 3DV, MedIm · 14 Aug 2023
Two-Stream Regression Network for Dental Implant Position Prediction
  Xinquan Yang, Xuguang Li, Xuechen Li, Wenting Chen, Linlin Shen, X. Li, Yongqiang Deng · 17 May 2023
Self-attention in Vision Transformers Performs Perceptual Grouping, Not Attention
  Paria Mehrani, John K. Tsotsos · 02 Mar 2023
Transformadores: Fundamentos teoricos y Aplicaciones (Transformers: Theoretical Foundations and Applications)
  J. D. L. Torre · 18 Feb 2023
V1T: large-scale mouse V1 response prediction using a Vision Transformer
  Bryan M. Li, I. M. Cornacchia, Nathalie L Rochefort, A. Onken · 06 Feb 2023
Unveiling the Tapestry: the Interplay of Generalization and Forgetting in Continual Learning
  Zenglin Shi, Jing Jie, Ying Sun, J. Lim, Mengmi Zhang · CLL · 21 Nov 2022
ViT-CX: Causal Explanation of Vision Transformers
  Weiyan Xie, Xiao-hui Li, Caleb Chen Cao, Nevin L. Zhang · ViT · 06 Nov 2022
Delving into Masked Autoencoders for Multi-Label Thorax Disease Classification
  Junfei Xiao, Yutong Bai, Alan Yuille, Zongwei Zhou · MedIm, ViT · 23 Oct 2022
Quantitative Metrics for Evaluating Explanations of Video DeepFake Detectors
  Federico Baldassarre, Quentin Debard, Gonzalo Fiz Pontiveros, Tri Kurniawan Wijaya · 07 Oct 2022
Deep Digging into the Generalization of Self-Supervised Monocular Depth Estimation
  Ji-Hoon Bae, Sungho Moon, Sunghoon Im · MDE · 23 May 2022
Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs
  Xiaohan Ding, X. Zhang, Yi Zhou, Jungong Han, Guiguang Ding, Jian-jun Sun · VLM · 13 Mar 2022
Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Score for Area V4
  William Berrios, Arturo Deza · MedIm, ViT · 08 Mar 2022
Arbitrary Shape Text Detection using Transformers
  Z. Raisi, Georges Younes, John S. Zelek · ViT · 22 Feb 2022
How Do Vision Transformers Work?
  Namuk Park, Songkuk Kim · ViT · 14 Feb 2022
MPViT: Multi-Path Vision Transformer for Dense Prediction
  Youngwan Lee, Jonghee Kim, Jeffrey Willette, Sung Ju Hwang · ViT · 21 Dec 2021
nnFormer: Interleaved Transformer for Volumetric Segmentation
  Hong-Yu Zhou, J. Guo, Yinghao Zhang, Lequan Yu, Liansheng Wang, Yizhou Yu · ViT, MedIm · 07 Sep 2021
Partial success in closing the gap between human and machine vision
  Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix Wichmann, Wieland Brendel · VLM, AAML · 14 Jun 2021
Intriguing Properties of Vision Transformers
  Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Munawar Hayat, F. Khan, Ming-Hsuan Yang · ViT · 21 May 2021
Vision Transformers are Robust Learners
  Sayak Paul, Pin-Yu Chen · ViT · 17 May 2021