ResearchTrend.AI
arXiv:1907.06732
Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks

15 July 2019
Alejandro Molina, P. Schramowski, Kristian Kersting · ODL
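The paper learns activation functions as Padé approximants: ratios of two polynomials whose coefficients are trained end-to-end along with the network weights, with a "safe" variant that keeps the denominator bounded away from zero. A minimal NumPy sketch of that safe form, P(x) / (1 + |Q(x)|); the coefficients below are illustrative placeholders, not the paper's learned initializations:

```python
import numpy as np

def pau(x, a, b):
    """'Safe' Padé Activation Unit sketch: P(x) / (1 + |Q(x)|).

    a: numerator coefficients a_0..a_m (constant term first).
    b: denominator coefficients b_1..b_n (Q has no constant term).
    In the paper these are learned per layer; here they are fixed.
    """
    # np.polyval expects highest power first, so reverse the coefficients.
    num = np.polyval(a[::-1], x)                       # a_0 + a_1 x + ... + a_m x^m
    q = np.polyval(np.concatenate(([0.0], b))[::-1], x)  # b_1 x + ... + b_n x^n
    return num / (1.0 + np.abs(q))

# With a = [0, 1] and an empty b, the unit reduces to the identity.
x = np.linspace(-3.0, 3.0, 7)
y = pau(x, np.array([0.0, 1.0]), np.array([]))
```

Because the denominator is 1 + |Q(x)|, the unit can never divide by zero, which is what makes end-to-end training of the coefficients stable.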

Papers citing "Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks"

37 / 37 papers shown
KAN or MLP? Point Cloud Shows the Way Forward
  Yan Shi, Qingdong He, Yijun Liu, Xiaoyu Liu, Jingyong Su · 3DPC · 18 Apr 2025
Learnable polynomial, trigonometric, and tropical activations
  Ismail Khalfaoui-Hassani, Stefan Kesselheim · 03 Feb 2025
MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning
  Jingfan Zhang, Yi Zhao, Dan Chen, Xing Tian, Huanran Zheng, Wei Zhu · MoE · 23 Oct 2024
Taming the Tail: Leveraging Asymmetric Loss and Pade Approximation to Overcome Medical Image Long-Tailed Class Imbalance
  Pankhi Kashyap, Pavni Tandon, Sunny Gupta, Abhishek Tiwari, Ritwik Kulkarni, Kshitij Sharad Jadhav · 05 Oct 2024
PEDRO: Parameter-Efficient Fine-tuning with Prompt DEpenDent Representation MOdification
  Tianfang Xie, Tianjing Li, Wei Zhu, Wei Han, Yi Zhao · 26 Sep 2024
Kolmogorov-Arnold Transformer
  Xingyi Yang, Xinchao Wang · 16 Sep 2024
Efficient Search for Customized Activation Functions with Gradient Descent
  Lukas Strack, Mahmoud Safari, Frank Hutter · 13 Aug 2024
Activations Through Extensions: A Framework To Boost Performance Of Neural Networks
  Chandramouli Kamanchi, Sumanta Mukherjee, K. Sampath, Pankaj Dayama, Arindam Jati, Vijay Ekambaram, Dzung Phan · 07 Aug 2024
IAPT: Instruction-Aware Prompt Tuning for Large Language Models
  Wei-wei Zhu, Aaron Xuxiang Tian, Congrui Yin, Yuan Ni, Xiaoling Wang, Guotong Xie · 28 May 2024
PAON: A New Neuron Model using Padé Approximants
  Onur Keleş, A. Murat Tekalp · 18 Mar 2024
Covering Number of Real Algebraic Varieties and Beyond: Improved Bounds and Applications
  Yifan Zhang, Joe Kileel · 09 Nov 2023
A Non-monotonic Smooth Activation Function
  Koushik Biswas, Meghana Karri, Ulaş Bağcı · 16 Oct 2023
A Machine Learning-oriented Survey on Tiny Machine Learning
  Luigi Capogrosso, Federico Cunico, D. Cheng, Franco Fummi, Marco Cristani · SyDa, MU · 21 Sep 2023
Improved Auto-Encoding using Deterministic Projected Belief Networks
  P. Baggenstoss · 14 Sep 2023
Learning Specialized Activation Functions for Physics-informed Neural Networks
  Honghui Wang, Lu Lu, Shiji Song, Gao Huang · PINN, AI4CE · 08 Aug 2023
Rational Neural Network Controllers
  M. Newton, A. Papachristodoulou · OOD, AAML · 12 Jul 2023
Self-Expanding Neural Networks
  Rupert Mitchell, Robin Menzenbach, Kristian Kersting, Martin Mundt · 10 Jul 2023
Class-Incremental Exemplar Compression for Class-Incremental Learning
  Zilin Luo, Yaoyao Liu, Bernt Schiele, Qianru Sun · VLM, CLL · 24 Mar 2023
A comparison of rational and neural network based approximations
  V. Peiris, R. D. Millán, N. Sukhorukova, J. Ugon · 08 Mar 2023
Convolutional Neural Operators for robust and accurate learning of PDEs
  Bogdan Raonić, Roberto Molinaro, Tim De Ryck, Tobias Rohner, Francesca Bartolucci, Rima Alaifari, Siddhartha Mishra, Emmanuel de Bezenac · AAML · 02 Feb 2023
Efficient Activation Function Optimization through Surrogate Modeling
  G. Bingham, Risto Miikkulainen · 13 Jan 2023
Deepening Neural Networks Implicitly and Locally via Recurrent Attention Strategy
  Shan Zhong, Wushao Wen, Jinghui Qin, Zhongzhan Huang · 27 Oct 2022
Transformers with Learnable Activation Functions
  Haishuo Fang, Ji-Ung Lee, N. Moosavi, Iryna Gurevych · AI4CE · 30 Aug 2022
Deep Learning and Symbolic Regression for Discovering Parametric Equations
  Michael Zhang, Samuel Kim, Peter Y. Lu, M. Soljačić · 01 Jul 2022
Adaptable Adapters
  N. Moosavi, Quentin Delfosse, Kristian Kersting, Iryna Gurevych · 03 May 2022
SMU: smooth activation function for deep networks using smoothing maximum technique
  Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, A. Pandey · 08 Nov 2021
OSS-Net: Memory Efficient High Resolution Semantic Segmentation of 3D Medical Data
  Christoph Reich, Tim Prangemeier, Ozdemir Cetin, Heinz Koeppl · 20 Oct 2021
Activation Functions in Deep Learning: A Comprehensive Survey and Benchmark
  S. Dubey, S. Singh, B. B. Chaudhuri · 29 Sep 2021
SAU: Smooth activation function using convolution with approximate identities
  Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, A. Pandey · 27 Sep 2021
ErfAct and Pserf: Non-monotonic Smooth Trainable Activation Functions
  Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, A. Pandey · 09 Sep 2021
Effect of the output activation function on the probabilities and errors in medical image segmentation
  Lars Nieradzik, G. Scheuermann, D. Saur, Christina Gillmann · SSeg, MedIm, UQCV · 02 Sep 2021
Sisyphus: A Cautionary Tale of Using Low-Degree Polynomial Activations in Privacy-Preserving Deep Learning
  Karthik Garimella, N. Jha, Brandon Reagen · 26 Jul 2021
Legendre Deep Neural Network (LDNN) and its application for approximation of nonlinear Volterra Fredholm Hammerstein integral equations
  Z. Hajimohammadi, Kourosh Parand, A. Ghodsi · 27 Jun 2021
Orthogonal-Padé Activation Functions: Trainable Activation functions for smooth and faster convergence in deep networks
  Koushik Biswas, Shilpak Banerjee, A. Pandey · ODL · 17 Jun 2021
Learning specialized activation functions with the Piecewise Linear Unit
  Yucong Zhou, Zezhou Zhu, Zhaobai Zhong · 08 Apr 2021
Adaptive Rational Activations to Boost Deep Reinforcement Learning
  Quentin Delfosse, P. Schramowski, Martin Mundt, Alejandro Molina, Kristian Kersting · 18 Feb 2021
Universal Approximation with Deep Narrow Networks
  Patrick Kidger, Terry Lyons · 21 May 2019