Bayesian Neural Networks: A Min-Max Game Framework

18 November 2023
Junping Hong
E. Kuruoglu
Abstract

This paper is a preliminary study of the robustness and noise behavior of deep neural networks through a game-theoretic formulation of Bayesian Neural Networks (BNNs) with the maximal coding rate distortion loss. BNNs have been shown to add some robustness to deep learning, and the min-max method is a natural conservative complement to the Bayesian approach. Inspired by the recent closed-loop transcription neural network, we formulate the BNN as a game between the deterministic neural network f and the sampling network f + ξ, or f + r·ξ. Compared with previous BNNs, the game-theoretic BNN learns a solution space within a certain gap between the center f and the sampling point f + r·ξ, and it is a conservative choice with a meaningful prior setting. Furthermore, the minimum points between f and f + r·ξ become stable when the subspace dimension is large enough and the model f is well trained. As a result, f has a high chance of recognizing out-of-distribution or noisy data in the subspace, rather than at the prediction level, even when f is trained online on only a few iterations of true data. So far, our experiments are limited to the MNIST and Fashion-MNIST data sets; more experiments with realistic data sets and more complicated neural network models are needed to validate these arguments.
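The min-max game described in the abstract can be sketched on a toy problem: a sampled perturbation ξ plays the maximizing side, while the model f descends on the worst-case sampling point f + r·ξ. The following is a minimal illustrative sketch only — the linear model, MSE loss, sign-flip inner maximization, and the radius r = 0.1 are all hypothetical choices for exposition, not the paper's actual architecture or its maximal coding rate distortion loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical setup, not the paper's MNIST experiments).
X = rng.normal(size=(64, 8))
W_true = rng.normal(size=(8, 1))
y = X @ W_true

W = np.zeros((8, 1))   # deterministic network f (here: a single linear layer)
r = 0.1                # radius controlling the gap between f and f + r*xi
lr = 0.05

def loss(W_eff):
    """MSE of the (possibly perturbed) linear model."""
    return float(np.mean((X @ W_eff - y) ** 2))

for step in range(200):
    xi = rng.normal(size=W.shape)      # sampled perturbation xi
    xi /= np.linalg.norm(xi)
    # Inner max: keep the perturbation direction that hurts f the most.
    if loss(W + r * xi) < loss(W - r * xi):
        xi = -xi
    # Outer min: gradient step on the worst-case sampling point f + r*xi.
    W_eff = W + r * xi
    grad = 2 * X.T @ (X @ W_eff - y) / len(X)
    W -= lr * grad

print(round(loss(W), 4))
```

Because the outer player only ever sees the perturbed point f + r·ξ, the learned center W stays within roughly the gap r of the target — a conservative solution region rather than a single point estimate.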
