Activation function design for deep networks: linearity and effective initialisation
arXiv: 2105.07741 · 17 May 2021
Michael Murray, V. Abrol, Jared Tanner
Communities: ODL, LLMSV

Papers citing "Activation function design for deep networks: linearity and effective initialisation"

6 / 6 papers shown
On the Initialisation of Wide Low-Rank Feedforward Neural Networks
Thiziri Nait Saada, Jared Tanner
31 Jan 2023

Expected Gradients of Maxout Networks and Consequences to Parameter Initialization
Hanna Tseran, Guido Montúfar
Communities: ODL
17 Jan 2023

Characterizing the Spectrum of the NTK via a Power Series Expansion
Michael Murray, Hui Jin, Benjamin Bowman, Guido Montúfar
15 Nov 2022

Wide and Deep Neural Networks Achieve Optimality for Classification
Adityanarayanan Radhakrishnan, M. Belkin, Caroline Uhler
29 Apr 2022

Coordinate descent on the orthogonal group for recurrent neural network training
E. Massart, V. Abrol
30 Jul 2021

TanhExp: A Smooth Activation Function with High Convergence Speed for Lightweight Neural Networks
Xinyu Liu, Xiaoguang Di
22 Mar 2020