On the Reproducibility of Neural Network Predictions (arXiv:2102.03349)

5 February 2021
Srinadh Bhojanapalli, Kimberly Wilber, Andreas Veit, A. S. Rawat, Seungyeon Kim, A. Menon, Sanjiv Kumar

Papers citing "On the Reproducibility of Neural Network Predictions"

18 citing papers listed.
Improving Chain-of-Thought Efficiency for Autoregressive Image Generation
Zeqi Gu, Markos Georgopoulos, Xiaoliang Dai, Marjan Ghazvininejad, Chu Wang, ..., Zecheng He, Zijian He, Jiawei Zhou, Abe Davis, Jialiang Wang
07 Oct 2025
Measuring and Mitigating Local Instability in Deep Neural Networks
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Arghya Datta, Subhrangshu Nandi, Jingcheng Xu, Greg Ver Steeg, He Xie, Anoop Kumar, Aram Galstyan
18 May 2023
Similarity of Neural Network Models: A Survey of Functional and Representational Measures
ACM Computing Surveys (ACM Comput. Surv.), 2023
Max Klabunde, Tobias Schumacher, M. Strohmaier, Florian Lemmerich
10 May 2023
Maintaining Stability and Plasticity for Predictive Churn Reduction
George Adam, B. Haibe-Kains, Anna Goldenberg
06 May 2023
Convex Dual Theory Analysis of Two-Layer Convolutional Neural Networks with Soft-Thresholding
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2023
Chunyan Xiong, Meng Lu, Xiaotong Yu, Jian-Peng Cao, Zhong Chen, D. Guo, X. Qu
14 Apr 2023
On the Variance of Neural Network Training with respect to Test Sets and Distributions
International Conference on Learning Representations (ICLR), 2023
Keller Jordan
04 Apr 2023
Measuring the Instability of Fine-Tuning
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Yupei Du, D. Nguyen
15 Feb 2023
On the Factory Floor: ML Engineering for Industrial-Scale Ads Recommendation Models
Rohan Anil, S. Gadanho, Danya Huang, Nijith Jacob, Zhuoshu Li, ..., Cristina Pop, Kevin Regan, G. Shamir, Rakesh Shivanna, Qiqi Yan
12 Sep 2022
On the Prediction Instability of Graph Neural Networks
Max Klabunde, Florian Lemmerich
20 May 2022
Predicting on the Edge: Identifying Where a Larger Model Does Better
Taman Narayan, Heinrich Jiang, Sen Zhao, Surinder Kumar
15 Feb 2022
Real World Large Scale Recommendation Systems Reproducibility and Smooth Activations
G. Shamir, Dong Lin
14 Feb 2022
Reproducibility in Optimization: Theoretical Framework and Limits
Neural Information Processing Systems (NeurIPS), 2022
Kwangjun Ahn, Prateek Jain, Ziwei Ji, Satyen Kale, Praneeth Netrapalli, G. Shamir
09 Feb 2022
Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions
International Conference on Machine Learning (ICML), 2022
Aaron Mishkin, Arda Sahiner, Mert Pilanci
02 Feb 2022
Neural Network Weights Do Not Converge to Stationary Points: An Invariant Measure Perspective
International Conference on Machine Learning (ICML), 2021
J.N. Zhang, Haochuan Li, S. Sra, Ali Jadbabaie
12 Oct 2021
Use of speaker recognition approaches for learning and evaluating embedding representations of musical instrument sounds
IEEE/ACM Transactions on Audio Speech and Language Processing (TASLP), 2021
Xuan Shi, Erica Cooper, Junichi Yamagishi
24 Jul 2021
Assessing Generalization of SGD via Disagreement
International Conference on Learning Representations (ICLR), 2021
Yiding Jiang, Vaishnavh Nagarajan, Christina Baek, J. Zico Kolter
25 Jun 2021
Churn Reduction via Distillation
International Conference on Learning Representations (ICLR), 2021
Heinrich Jiang, Harikrishna Narasimhan, Dara Bahri, Andrew Cotter, Afshin Rostamizadeh
04 Jun 2021
Synthesizing Irreproducibility in Deep Networks
R. Snapp, G. Shamir
21 Feb 2021