ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree
arXiv:1509.08985 · 30 September 2015
Chen-Yu Lee, Patrick W. Gallagher, Zhuowen Tu
Community: AI4CE
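The indexed paper generalizes max and average pooling; its "mixed" variant blends the two with a learned scalar, f(x) = a·max(x) + (1 − a)·avg(x). A minimal NumPy sketch of that idea (function and parameter names are illustrative, not from the paper's code):

```python
import numpy as np

def mixed_pool(x, a=0.5, k=2):
    """Mixed pooling over non-overlapping k x k windows:
    a * max-pool + (1 - a) * average-pool, with blend weight a in [0, 1]
    (learned during training in the paper; fixed here for illustration).
    x: 2-D feature map whose height and width are divisible by k."""
    h, w = x.shape
    # Gather each k x k window into the last axis: (h/k, w/k, k*k).
    windows = (x.reshape(h // k, k, w // k, k)
                .transpose(0, 2, 1, 3)
                .reshape(h // k, w // k, k * k))
    return a * windows.max(axis=-1) + (1 - a) * windows.mean(axis=-1)

x = np.arange(16, dtype=float).reshape(4, 4)
print(mixed_pool(x, a=1.0))  # a=1 reduces to plain 2x2 max pooling
print(mixed_pool(x, a=0.0))  # a=0 reduces to plain 2x2 average pooling
```

The "gated" variant in the paper replaces the single scalar a with a per-region gate computed from the window contents; the sketch above covers only the mixed case.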

Papers citing "Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree"

20 / 70 papers shown (title; authors; community tag where listed; counts as shown on the page; date):

Large-Margin Softmax Loss for Convolutional Neural Networks
    Weiyang Liu, Yandong Wen, Zhiding Yu, Meng Yang · CVBM · 36 · 1,451 · 0 · 07 Dec 2016
Efficient Orthogonal Parametrisation of Recurrent Neural Networks Using Householder Reflections
    Zakaria Mhammedi, Andrew D. Hellicar, Ashfaqur Rahman, James Bailey · 27 · 129 · 0 · 01 Dec 2016
CIFAR-10: KNN-based Ensemble of Classifiers
    Yehya Abouelnaga, Ola S. Ali, Hager Rady, Mohamed Moustafa · FedML · 26 · 65 · 0 · 15 Nov 2016
Sparsely-Connected Neural Networks: Towards Efficient VLSI Implementation of Deep Neural Networks
    A. Ardakani, C. Condo, W. Gross · 33 · 40 · 0 · 04 Nov 2016
Temporal Ensembling for Semi-Supervised Learning
    S. Laine, Timo Aila · UQCV · 90 · 2,524 · 0 · 07 Oct 2016
Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
    Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio · MQ · 54 · 1,846 · 0 · 22 Sep 2016
Lets keep it simple, Using simple architectures to outperform deeper and more complex architectures
    S. H. HasanPour, Mohammad Rouhani, Mohsen Fayyaz, Mohammad Sabokrou · 23 · 119 · 0 · 22 Aug 2016
YodaNN: An Architecture for Ultra-Low Power Binary-Weight CNN Acceleration
    Renzo Andri, Lukas Cavigelli, D. Rossi, Luca Benini · 23 · 196 · 0 · 17 Jun 2016
Convolutional Residual Memory Networks
    Joel Ruben Antony Moniz, C. Pal · 31 · 23 · 0 · 16 Jun 2016
Convolutional Neural Fabrics
    Shreyas Saxena, Jakob Verbeek · 24 · 225 · 0 · 08 Jun 2016
FractalNet: Ultra-Deep Neural Networks without Residuals
    Gustav Larsson, Michael Maire, Gregory Shakhnarovich · 45 · 933 · 0 · 24 May 2016
End-to-End Kernel Learning with Supervised Convolutional Kernel Networks
    Julien Mairal · SSL · 34 · 130 · 0 · 20 May 2016
DisturbLabel: Regularizing CNN on the Loss Layer
    Lingxi Xie, Jingdong Wang, Zhen Wei, Meng Wang, Qi Tian · 41 · 250 · 0 · 30 Apr 2016
Deep Residual Networks with Exponential Linear Unit
    Anish Shah, Eashan Kadam, Hena Shah, Sameer Shinde, Sandip Shingade · 50 · 120 · 0 · 14 Apr 2016
Multi-Bias Non-linear Activation in Deep Neural Networks
    Hongyang Li, Wanli Ouyang, Xiaogang Wang · 15 · 64 · 0 · 03 Apr 2016
Deep Networks with Stochastic Depth
    Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Q. Weinberger · 74 · 2,338 · 0 · 30 Mar 2016
Convolution in Convolution for Network in Network
    Yanwei Pang, Manli Sun, Xiaoheng Jiang, Xuelong Li · 34 · 167 · 0 · 22 Mar 2016
Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units
    Wenling Shang, Kihyuk Sohn, Diogo Almeida, Honglak Lee · 24 · 499 · 0 · 16 Mar 2016
Binarized Neural Networks
    Itay Hubara, Daniel Soudry, Ran El-Yaniv · MQ · 58 · 1,350 · 0 · 08 Feb 2016
Cost Sensitive Learning of Deep Feature Representations from Imbalanced Data
    Salman H. Khan, Munawar Hayat, Bennamoun, Ferdous Sohel, R. Togneri · 23 · 874 · 0 · 14 Aug 2015