Zero-bias autoencoders and the benefits of co-adapting features
K. Konda, Roland Memisevic, David M. Krueger
arXiv:1402.3337, 13 February 2014

Papers citing "Zero-bias autoencoders and the benefits of co-adapting features" (15 of 15 shown):

  • Residual Stream Analysis with Multi-Layer SAEs. Tim Lawson, Lucy Farnik, Conor Houghton, Laurence Aitchison. 06 Sep 2024.
  • Bayesian optimization for sparse neural networks with trainable activation functions. M. Fakhfakh, Lotfi Chaari. 10 Apr 2023.
  • A survey on modern trainable activation functions. Andrea Apicella, Francesco Donnarumma, Francesco Isgrò, R. Prevete. 02 May 2020.
  • Weighted Sigmoid Gate Unit for an Activation Function of Deep Neural Network. Masayuki Tanaka. 03 Oct 2018.
  • Learning Combinations of Activation Functions. Franco Manessi, A. Rozza. 29 Jan 2018.
  • Automated Pruning for Deep Neural Network Compression. Franco Manessi, A. Rozza, Simone Bianco, Paolo Napoletano, Raimondo Schettini. 05 Dec 2017.
  • EndNet: Sparse AutoEncoder Network for Endmember Extraction and Hyperspectral Unmixing. Savas Ozkan, Berk Kaya, G. Akar. 06 Aug 2017.
  • Structured Prediction of 3D Human Pose with Deep Neural Networks. Bugra Tekin, Isinsu Katircioglu, Mathieu Salzmann, Vincent Lepetit, Pascal Fua. 17 May 2016.
  • Do Deep Convolutional Nets Really Need to be Deep and Convolutional? G. Urban, Krzysztof J. Geras, Samira Ebrahimi Kahou, Ozlem Aslan, Shengjie Wang, R. Caruana, Abdel-rahman Mohamed, Matthai Philipose, Matthew Richardson. 17 Mar 2016.
  • Do Deep Neural Networks Learn Facial Action Units When Doing Expression Recognition? Pooya Khorrami, T. Paine, Thomas S. Huang. 10 Oct 2015.
  • Dropout as data augmentation. Xavier Bouthillier, K. Konda, Pascal Vincent, Roland Memisevic. 29 Jun 2015.
  • Why Regularized Auto-Encoders learn Sparse Representation? Devansh Arpit, Yingbo Zhou, H. Ngo, V. Govindaraju. 21 May 2015.
  • Self-Tuned Deep Super Resolution. Zhangyang Wang, Yingzhen Yang, Zhaowen Wang, Shiyu Chang, Wei Han, Jianchao Yang, Thomas S. Huang. 22 Apr 2015.
  • Difference Target Propagation. Dong-Hyun Lee, Saizheng Zhang, Asja Fischer, Yoshua Bengio. 23 Dec 2014.
  • Improving neural networks by preventing co-adaptation of feature detectors. Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov. 03 Jul 2012.