Confidence May Cheat: Self-Training on Graph Neural Networks under Distribution Shift

27 January 2022
Authors: Hongrui Liu, Binbin Hu, Xiao Wang, Chuan Shi, Zhiqiang Zhang, Jun Zhou

Papers citing "Confidence May Cheat: Self-Training on Graph Neural Networks under Distribution Shift"

3 / 3 papers shown
Title: Be Confident! Towards Trustworthy Graph Neural Networks via Confidence Calibration
Authors: Xiao Wang, Hongrui Liu, Chuan Shi, Cheng Yang
Topics: UQCV
70 · 92 · 0
Date: 29 Sep 2021

Title: Revisiting Self-Training for Neural Sequence Generation
Authors: Junxian He, Jiatao Gu, Jiajun Shen, Marc'Aurelio Ranzato
Topics: SSL, LRM
206 · 252 · 0
Date: 30 Sep 2019

Title: Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Authors: Y. Gal, Zoubin Ghahramani
Topics: UQCV, BDL
230 · 8,157 · 0
Date: 06 Jun 2015