Cognitive Psychology for Deep Neural Networks: A Shape Bias Case Study
26 June 2017
Samuel Ritter, David Barrett, Adam Santoro, M. Botvinick

Papers citing "Cognitive Psychology for Deep Neural Networks: A Shape Bias Case Study"

50 / 58 papers shown

CogLM: Tracking Cognitive Development of Large Language Models
Xinglin Wang, Peiwen Yuan, Shaoxiong Feng, Yiwei Li, Boyuan Pan, Heda Wang, Yao Hu, Kan Li
17 Aug 2024

Towards a Psychology of Machines: Large Language Models Predict Human Memory
Markus Huff, Elanur Ulakçi
08 Mar 2024

A novel feature-scrambling approach reveals the capacity of convolutional neural networks to learn spatial relations
A. Farahat, Felix Effenberger, M. Vinck
12 Dec 2022

Mechanistic Mode Connectivity
Ekdeep Singh Lubana, Eric J. Bigelow, Robert P. Dick, David M. Krueger, Hidenori Tanaka
15 Nov 2022

The Role of Explanatory Value in Natural Language Processing
Kees van Deemter
13 Sep 2022

Gaussian Process Surrogate Models for Neural Networks
Michael Y. Li, Erin Grant, Thomas Griffiths
11 Aug 2022

Abutting Grating Illusion: Cognitive Challenge to Neural Network Models
Jinyu Fan, Yi Zeng
08 Aug 2022

Language models show human-like content effects on reasoning tasks
Ishita Dasgupta, Andrew Kyle Lampinen, Stephanie C. Y. Chan, Hannah R. Sheahan, Antonia Creswell, D. Kumaran, James L. McClelland, Felix Hill
14 Jul 2022

Using cognitive psychology to understand GPT-3
Marcel Binz, Eric Schulz
21 Jun 2022

InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness
Shruthi Gowda, Bahram Zonooz, Elahe Arani
12 Jun 2022

POODLE: Improving Few-shot Learning via Penalizing Out-of-Distribution Samples
Duong H. Le, Khoi Duc Minh Nguyen, Khoi Nguyen, Quoc-Huy Tran, Rang Nguyen, Binh-Son Hua
08 Jun 2022

Disentangling Abstraction from Statistical Pattern Matching in Human and Machine Learning
Sreejan Kumar, Ishita Dasgupta, Nathaniel D. Daw, Jonathan Cohen, Thomas Griffiths
04 Apr 2022

A Developmentally-Inspired Examination of Shape versus Texture Bias in Machines
Alexa R. Tartaglini, Wai Keen Vong, Brenden M. Lake
16 Feb 2022

Repaint: Improving the Generalization of Down-Stream Visual Tasks by Generating Multiple Instances of Training Examples
Amin Banitalebi-Dehkordi, Yong Zhang
20 Oct 2021

Predicting decision-making in the future: Human versus Machine
H. Ryu, Uijong Ju, C. Wallraven
09 Oct 2021

Distinguishing rule- and exemplar-based generalization in learning systems
Ishita Dasgupta, Erin Grant, Thomas Griffiths
08 Oct 2021

The Emergence of the Shape Bias Results from Communicative Efficiency
Eva Portelance, Michael C. Frank, Dan Jurafsky, Alessandro Sordoni, Romain Laroche
13 Sep 2021

Shape-Biased Domain Generalization via Shock Graph Embeddings
M. Narayanan, Vickram Rajendran, Benjamin Kimia
13 Sep 2021

IFBiD: Inference-Free Bias Detection
Ignacio Serna, Daniel DeAlcala, Aythami Morales, Julian Fierrez, J. Ortega-Garcia
09 Sep 2021

Deep Reinforcement Learning at the Edge of the Statistical Precipice
Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, Marc G. Bellemare
30 Aug 2021

Exploring Corruption Robustness: Inductive Biases in Vision Transformers and MLP-Mixers
Katelyn Morrison, B. Gilby, Colton Lipchak, Adam Mattioli, Adriana Kovashka
24 Jun 2021

Visual Search Asymmetry: Deep Nets and Humans Share Similar Inherent Biases
Shashi Kant Gupta, Mengmi Zhang, Chia-Chien Wu, J. Wolfe, Gabriel Kreiman
05 Jun 2021

Can Subnetwork Structure be the Key to Out-of-Distribution Generalization?
Dinghuai Zhang, Kartik Ahuja, Yilun Xu, Yisen Wang, Aaron Courville
05 Jun 2021

A Procedural World Generation Framework for Systematic Evaluation of Continual Learning
Timm Hess, Martin Mundt, Iuliia Pliushch, Visvanathan Ramesh
04 Jun 2021

This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition
Meike Nauta, Annemarie Jutte, Jesper C. Provoost, C. Seifert
05 Nov 2020

Informative Dropout for Robust Representation Learning: A Shape-bias Perspective
Baifeng Shi, Dinghuai Zhang, Qi Dai, Zhanxing Zhu, Yadong Mu, Jingdong Wang
10 Aug 2020

What they do when in doubt: a study of inductive biases in seq2seq learners
Eugene Kharitonov, Rahma Chaabouni
26 Jun 2020

Teaching CNNs to mimic Human Visual Cognitive Process & regularise Texture-Shape bias
Satyam Mohla, Anshul Nasery, Biplab Banerjee
25 Jun 2020

A neural network walks into a lab: towards using deep nets as models for human behavior
Wei-Ying Ma, B. Peters
02 May 2020

Five Points to Check when Comparing Visual Perception in Humans and Machines
Christina M. Funke, Judy Borowski, Karolina Stosio, Wieland Brendel, Thomas S. A. Wallis, Matthias Bethge
20 Apr 2020

Can you hear me now? Sensitive comparisons of human and machine perception
Michael A. Lepori, C. Firestone
27 Mar 2020

Visual Privacy Protection via Mapping Distortion
Yiming Li, Peidong Liu, Yong Jiang, Shutao Xia
05 Nov 2019

Increasing Shape Bias in ImageNet-Trained Networks Using Transfer Learning and Domain-Adversarial Methods
Francis Brochu
30 Jul 2019

Mutual exclusivity as a challenge for deep neural networks
Kanishk Gandhi, Brenden M. Lake
24 Jun 2019

REPAIR: Removing Representation Bias by Dataset Resampling
Yi Li, Nuno Vasconcelos
16 Apr 2019

An Analysis of Pre-Training on Object Detection
Hengduo Li, Bharat Singh, Mahyar Najibi, Zuxuan Wu, L. Davis
11 Apr 2019

Neural Networks Trained on Natural Scenes Exhibit Gestalt Closure
Been Kim, Emily Reif, Martin Wattenberg, Samy Bengio, Michael C. Mozer
04 Mar 2019

The meaning of "most" for visual question answering models
A. Kuhnle, Ann A. Copestake
31 Dec 2018

Learning Not to Learn: Training Deep Neural Networks with Biased Data
Byungju Kim, Hyunwoo Kim, Kyungsu Kim, Sungjin Kim, Junmo Kim
26 Dec 2018

ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix Wichmann, Wieland Brendel
29 Nov 2018

Shape and Margin-Aware Lung Nodule Classification in Low-dose CT Images via Soft Activation Mapping
Yiming Lei, Yukun Tian, Hongming Shan, Junping Zhang, Ge Wang, Mannudeep K. Kalra
30 Oct 2018

Deep Learning in Information Security
S. Thaler, Vlado Menkovski, M. Petković
12 Sep 2018

Interpreting Deep Learning: The Machine Learning Rorschach Test?
Adam S. Charles
01 Jun 2018

Potentials and Limitations of Deep Neural Networks for Cognitive Robots
Doreen Jirak, S. Wermter
02 May 2018

Probing Physics Knowledge Using Tools from Developmental Psychology
Luis S. Piloto, Ari Weinstein, TB Dhruva, Arun Ahuja, M. Berk Mirza, Greg Wayne, David Amos, Chia-Chun Hung, M. Botvinick
03 Apr 2018

Assessing Shape Bias Property of Convolutional Neural Networks
Hossein Hosseini, Baicen Xiao, Mayoore S. Jaiswal, Radha Poovendran
21 Mar 2018

Semantic Adversarial Examples
Hossein Hosseini, Radha Poovendran
16 Mar 2018

Few-Shot Learning with Metric-Agnostic Conditional Embeddings
Nathan Hilliard, Lawrence Phillips, Scott Howland, A. Yankov, Court D. Corley, Nathan Oken Hodas
12 Feb 2018

Evaluating Compositionality in Sentence Embeddings
Ishita Dasgupta, Demi Guo, Andreas Stuhlmüller, S. Gershman, Noah D. Goodman
12 Feb 2018

Cognitive Deficit of Deep Learning in Numerosity
Xiaolin Wu, Xi Zhang, X. Shu
09 Feb 2018