ResearchTrend.AI

How far can we go without convolution: Improving fully-connected networks
arXiv:1511.02580 · 9 November 2015
Zhouhan Lin, Roland Memisevic, K. Konda

Papers citing "How far can we go without convolution: Improving fully-connected networks"

16 papers shown

Artificial Intelligence and Deep Learning Algorithms for Epigenetic Sequence Analysis: A Review for Epigeneticists and AI Experts
  Muhammad Tahir, Mahboobeh Norouzi, Shehroz S. Khan, James Davie, Soichiro Yamanaka, A. Ashraf
  01 Apr 2025

Polynomial Time Cryptanalytic Extraction of Deep Neural Networks in the Hard-Label Setting
  Nicholas Carlini, J. Chávez-Saab, Anna Hambitzer, Francisco Rodríguez-Henríquez, Adi Shamir
  08 Oct 2024 · AAML

Towards Exact Computation of Inductive Bias
  Akhilan Boopathy, William Yue, Jaedong Hwang, Abhiram Iyer, Ila Fiete
  22 Jun 2024

Emergent representations in networks trained with the Forward-Forward algorithm
  Niccolo Tosato, Lorenzo Basile, Emanuele Ballarin, Giuseppe de Alteriis, Alberto Cazzaniga, A. Ansuini
  26 May 2023

Model-agnostic Measure of Generalization Difficulty
  Akhilan Boopathy, Kevin Liu, Jaedong Hwang, Shu Ge, Asaad Mohammedsaleh, Ila Fiete
  01 May 2023

Deep Neural Networks as Complex Networks
  Emanuele La Malfa, G. Malfa, Claudio Caprioli, Giuseppe Nicosia, Vito Latora
  12 Sep 2022 · PINN, GNN

SPINE: Soft Piecewise Interpretable Neural Equations
  Jasdeep Singh Grover, Harsh Minesh Domadia, Rajashree Tapase, Grishma Sharma
  20 Nov 2021

Cascaded Classifier for Pareto-Optimal Accuracy-Cost Trade-Off Using off-the-Shelf ANNs
  Cecilia Latotzke, Johnson Loh, T. Gemmeke
  27 Oct 2021

ResMLP: Feedforward networks for image classification with data-efficient training
  Hugo Touvron, Piotr Bojanowski, Mathilde Caron, Matthieu Cord, Alaaeldin El-Nouby, ..., Gautier Izacard, Armand Joulin, Gabriel Synnaeve, Jakob Verbeek, Hervé Jégou
  07 May 2021 · VLM

Computational Separation Between Convolutional and Fully-Connected Networks
  Eran Malach, Shai Shalev-Shwartz
  03 Oct 2020

Variational Information Distillation for Knowledge Transfer
  Sungsoo Ahn, S. Hu, Andreas C. Damianou, Neil D. Lawrence, Zhenwen Dai
  11 Apr 2019

Biologically plausible deep learning -- but how far can we go with shallow networks?
  Bernd Illing, W. Gerstner, Johanni Brea
  27 Feb 2019

Classifying Signals on Irregular Domains via Convolutional Cluster Pooling
  Angelo Porrello, Davide Abati, Simone Calderara, Rita Cucchiara
  13 Feb 2019

Markov chain Hebbian learning algorithm with ternary synaptic units
  Guhyun Kim, V. Kornijcuk, Dohun Kim, Inho Kim, Jaewook Kim, Hyo Cheon Woo, Jihun Kim, C. S. Hwang, D. Jeong
  23 Nov 2017

The loss surface of deep and wide neural networks
  Quynh N. Nguyen, Matthias Hein
  26 Apr 2017 · ODL

Do Deep Convolutional Nets Really Need to be Deep and Convolutional?
  G. Urban, Krzysztof J. Geras, Samira Ebrahimi Kahou, Ozlem Aslan, Shengjie Wang, R. Caruana, Abdel-rahman Mohamed, Matthai Philipose, Matthew Richardson
  17 Mar 2016