Smooth activations and reproducibility in deep networks
G. Shamir, Dong Lin, Lorenzo Coviello
arXiv:2010.09931 · 20 October 2020
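For context, this paper studies how smooth, ReLU-like activations affect prediction reproducibility across retrainings, and proposes the Smooth reLU (SmeLU), which replaces ReLU's kink at zero with a quadratic blend. A minimal NumPy sketch of that piecewise form, assuming the standard definition with smoothing half-width beta (the function name smelu here is illustrative):

import numpy as np

def smelu(x, beta=1.0):
    # Smooth reLU (SmeLU): equals ReLU outside [-beta, beta], with a
    # quadratic blend in between, so the first derivative is continuous.
    x = np.asarray(x, dtype=float)
    return np.where(x <= -beta, 0.0,
           np.where(x >= beta, x, (x + beta) ** 2 / (4.0 * beta)))

# At the joins x = -beta and x = beta the pieces and their slopes agree:
# f(-beta) = 0 with slope 0, f(beta) = beta with slope 1.
print(smelu([-2.0, 0.0, 1.0, 2.0]))  # [0.   0.25 1.   2.  ]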

Papers citing "Smooth activations and reproducibility in deep networks" (10 of 10 shown)

Rewind-to-Delete: Certified Machine Unlearning for Nonconvex Functions
Siqiao Mu, Diego Klabjan · MU · 15 Sep 2024

Deterministic Nonsmooth Nonconvex Optimization
Michael I. Jordan, Guy Kornowski, Tianyi Lin, Ohad Shamir, Manolis Zampetakis · 16 Feb 2023

Instability in clinical risk stratification models using deep learning
D. Martinez, A. Yakubovich, Martin G. Seneviratne, Á. Lelkes, Akshit Tyagi, ..., N. L. Downing, Ron C. Li, Keith Morse, N. Shah, Ming-Jun Chen · OOD · 20 Nov 2022

On the Factory Floor: ML Engineering for Industrial-Scale Ads Recommendation Models
Rohan Anil, S. Gadanho, Danya Huang, Nijith Jacob, Zhuoshu Li, ..., Cristina Pop, Kevin Regan, G. Shamir, Rakesh Shivanna, Qiqi Yan · 3DV · 12 Sep 2022

Reducing Model Jitter: Stable Re-training of Semantic Parsers in Production Environments
Christopher Hidey, Fei Liu, Rahul Goel · 10 Apr 2022

Randomness In Neural Network Training: Characterizing The Impact of Tooling
Donglin Zhuang, Xingyao Zhang, S. Song, Sara Hooker · 22 Jun 2021

Anti-Distillation: Improving reproducibility of deep networks
G. Shamir, Lorenzo Coviello · 19 Oct 2020

TanhExp: A Smooth Activation Function with High Convergence Speed for Lightweight Neural Networks
Xinyu Liu, Xiaoguang Di · 22 Mar 2020

Large scale distributed neural network training through online distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton · FedML · 09 Apr 2018

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell · UQCV, BDL · 05 Dec 2016