DenoMAE2.0: Improving Denoising Masked Autoencoders by Classifying Local Patches

25 February 2025
Atik Faysal
Mohammad Rostami
Taha Boushine
Reihaneh Gh. Roshan
Huaxia Wang
Nikhil Muralidhar
Abstract

We introduce DenoMAE2.0, an enhanced denoising masked autoencoder that integrates a local patch classification objective alongside the traditional reconstruction loss to improve representation learning and robustness. Unlike conventional Masked Autoencoders (MAE), which focus solely on reconstructing missing inputs, DenoMAE2.0 introduces position-aware classification of unmasked patches, enabling the model to capture fine-grained local features while maintaining global coherence. This dual-objective approach is particularly beneficial in semi-supervised learning for wireless communication, where high noise levels and data scarcity pose significant challenges. We conduct extensive experiments on modulation signal classification across a wide range of signal-to-noise ratios (SNRs), from extremely low to moderately high, and in a low-data regime. Our results demonstrate that DenoMAE2.0 surpasses its predecessor, DenoMAE, and other baselines in both denoising quality and downstream classification accuracy. DenoMAE2.0 achieves a 1.1% accuracy improvement over DenoMAE on our dataset, and significant gains of 11.83% and 16.55% over DenoMAE on the RadioML benchmark for constellation diagram classification of modulation signals.
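To illustrate the dual-objective training described above, the sketch below combines a standard MAE-style reconstruction loss on masked patches with a classification loss on unmasked patches. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the module and argument names (DualObjectiveLoss, patch_cls_head, cls_weight) are hypothetical, and the classification target for each unmasked patch is assumed here to be its position index as one reading of "position-aware classification".

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualObjectiveLoss(nn.Module):
    """Sketch: reconstruction loss on masked patches + classification loss on unmasked patches.

    Assumptions (not from the paper's code): the encoder yields per-patch
    embeddings, the decoder predicts pixel values of masked patches, and the
    classification target for each unmasked patch is its position index.
    """

    def __init__(self, embed_dim: int, num_positions: int, cls_weight: float = 1.0):
        super().__init__()
        # Hypothetical linear head that classifies each unmasked patch embedding.
        self.patch_cls_head = nn.Linear(embed_dim, num_positions)
        self.cls_weight = cls_weight

    def forward(self, pred_pixels, target_pixels, mask, patch_embeddings, patch_positions):
        # pred_pixels, target_pixels: (B, N, patch_dim); mask: (B, N) with 1 = masked.
        # Reconstruction loss averaged over masked patches only (standard MAE practice).
        rec = ((pred_pixels - target_pixels) ** 2).mean(dim=-1)
        rec_loss = (rec * mask).sum() / mask.sum().clamp(min=1)

        # Position-aware classification of unmasked patches (assumed target: position index).
        logits = self.patch_cls_head(patch_embeddings)              # (B, N, num_positions)
        cls_per_patch = F.cross_entropy(
            logits.flatten(0, 1), patch_positions.flatten(), reduction="none"
        ).view_as(mask)
        unmasked = 1.0 - mask
        cls_loss = (cls_per_patch * unmasked).sum() / unmasked.sum().clamp(min=1)

        # Weighted sum of the two objectives; cls_weight is an assumed hyperparameter.
        return rec_loss + self.cls_weight * cls_loss

The key design point reflected here is that the two losses act on disjoint sets of patches: reconstruction supervises the masked positions, while the auxiliary classification supervises the visible ones, so every patch contributes to training.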

@article{faysal2025_2502.18202,
  title={DenoMAE2.0: Improving Denoising Masked Autoencoders by Classifying Local Patches},
  author={Atik Faysal and Mohammad Rostami and Taha Boushine and Reihaneh Gh. Roshan and Huaxia Wang and Nikhil Muralidhar},
  journal={arXiv preprint arXiv:2502.18202},
  year={2025}
}