SPLASH: Learnable Activation Functions for Improving Accuracy and Adversarial Robustness

16 June 2020
Mohammadamin Tavakoli, Forest Agostinelli, Pierre Baldi
Topics: AAML, FAtt

Papers citing "SPLASH: Learnable Activation Functions for Improving Accuracy and Adversarial Robustness" (5 / 5 papers shown)

Learnable polynomial, trigonometric, and tropical activations
Ismail Khalfaoui-Hassani, Stefan Kesselheim
03 Feb 2025

Efficient Activation Function Optimization through Surrogate Modeling
G. Bingham, Risto Miikkulainen
13 Jan 2023

Neural Networks with A La Carte Selection of Activation Functions
Moshe Sipper
24 Jun 2022

Parameterizing Activation Functions for Adversarial Robustness
Sihui Dai, Saeed Mahloujifar, Prateek Mittal
Topics: AAML
11 Oct 2021

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
Topics: ODL
30 Nov 2014