Re-Simulation-based Self-Supervised Learning for Pre-Training Foundation Models

11 March 2024
Philip Harris
Michael Kagan
Jeffrey Krupa
Benedikt Maier
Nathaniel Woodward
Abstract

Self-Supervised Learning (SSL) is at the core of training modern large machine learning models, providing a scheme for learning powerful representations that can be used in a variety of downstream tasks. However, SSL strategies must be adapted to the type of training data and the downstream tasks required. We propose RS3L ("Re-simulation-based self-supervised representation learning"), a novel simulation-based SSL strategy that employs re-simulation to drive data augmentation for contrastive learning in the physical sciences, particularly in fields that rely on stochastic simulators. By intervening in the middle of the simulation process and re-running simulation components downstream of the intervention, we generate multiple realizations of an event, thus producing a set of augmentations covering all physics-driven variations available in the simulator. Using experiments from high-energy physics, we explore how this strategy may enable the development of a foundation model; we show how RS3L pre-training enables powerful performance in downstream tasks such as discrimination of a variety of objects and uncertainty mitigation. In addition to our results, we make the RS3L dataset publicly available for further studies on how to improve SSL strategies.
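The core idea of re-simulation-based augmentation can be sketched in a few lines. The toy two-stage "simulator" below is purely illustrative (the names `stage_one`, `stage_two`, and `resimulation_views` are hypothetical, not from the paper, whose pipeline uses real event generators): the upstream state of an event is frozen, and only the stochastic downstream component is re-run with fresh seeds, yielding views of the same event that a contrastive objective can pull together.

```python
import math
import random

def stage_one(seed):
    # "Upstream" stage of a toy stochastic simulator (hypothetical stand-in
    # for e.g. the hard-scatter step): deterministic given its seed.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(8)]

def stage_two(latent, seed):
    # "Downstream" stochastic stage (stand-in for e.g. showering).
    # Re-running it with new seeds yields augmentations of the same event.
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, 0.1) for x in latent]

def resimulation_views(event_seed, n_views=2):
    # RS3L-style augmentation: freeze the upstream state, then
    # re-simulate the downstream components multiple times.
    latent = stage_one(event_seed)
    return [stage_two(latent, aug_seed) for aug_seed in range(n_views)]

def cosine(u, v):
    # Cosine similarity, the similarity measure typical of contrastive losses.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

v1, v2 = resimulation_views(event_seed=42)
other, _ = resimulation_views(event_seed=7)
# Two views of the same event are far more similar than views of different
# events -- exactly the signal a contrastive pre-training loss exploits.
print(cosine(v1, v2) > cosine(v1, other))
```

In the actual RS3L setting the downstream stochastic component is a full physics simulator rather than Gaussian noise, and the similarity is computed on learned representations rather than raw features, but the structure of the augmentation is the same.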

@article{harris2025_2403.07066,
  title={Re-Simulation-based Self-Supervised Learning for Pre-Training Foundation Models},
  author={Philip Harris and Michael Kagan and Jeffrey Krupa and Benedikt Maier and Nathaniel Woodward},
  journal={arXiv preprint arXiv:2403.07066},
  year={2025}
}