CLAWSAT: Towards Both Robust and Accurate Code Models

21 November 2022
Jinghan Jia
Shashank Srikant
Tamara Mitrovska
Chuang Gan
Shiyu Chang
Sijia Liu
Una-May O’Reilly
    AAML
arXiv:2211.11711 · PDF · HTML
Abstract

We integrate contrastive learning (CL) with adversarial learning to co-optimize the robustness and accuracy of code models. Unlike existing work, we show that code obfuscation, a standard code transformation operation, provides a novel means to generate complementary 'views' of code that enable us to achieve both robust and accurate code models. To the best of our knowledge, this is the first systematic study to explore and exploit the robustness and accuracy benefits of (multi-view) code obfuscations in code models. Specifically, we first adopt adversarial codes as robustness-promoting views in CL at the self-supervised pre-training phase. This yields improved robustness and transferability for downstream tasks. Next, at the supervised fine-tuning stage, we show that adversarial training with a proper temporally-staggered schedule of adversarial code generation can further improve the robustness and accuracy of the pre-trained code model. Built on these two modules, we develop CLAWSAT, a novel self-supervised learning (SSL) framework for code that integrates CL with adversarial views (CLAW) and staggered adversarial training (SAT). Evaluated on three downstream tasks across Python and Java, CLAWSAT consistently yields the best robustness and accuracy (e.g., 11% in robustness and 6% in accuracy on the code summarization task in Python). We additionally demonstrate the effectiveness of adversarial learning in CLAW by analyzing the characteristics of the loss landscape and the interpretability of the pre-trained models.
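The abstract names two components: contrastive pre-training that uses adversarially obfuscated code as the second view (CLAW), and fine-tuning with a temporally-staggered schedule of adversarial code generation (SAT). The sketch below illustrates one plausible reading of each idea; it is not the authors' implementation, and the encoder outputs, the attack routine, and the refresh period k are hypothetical placeholders.

```python
# Illustrative sketch of the two ideas in the abstract; NOT the CLAWSAT code.
# model(x) -> logits, attack(model, x, y) -> adversarial batch, and k are
# all assumptions made for the sake of a runnable example.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, tau=0.5):
    """Contrastive (NT-Xent) loss between two batches of views, e.g.
    embeddings of a program and of its adversarially obfuscated copy."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)              # (2B, d) stacked views
    sim = z @ z.t() / tau                       # scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))           # a view is not its own positive
    b = z1.size(0)
    # the positive for row i is the other view of the same program
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)

def fine_tune_staggered(model, loader, attack, epochs=10, k=3, lr=1e-4):
    """One reading of 'temporally-staggered' adversarial training:
    regenerate adversarial code only every k epochs rather than every
    step, and train on clean plus cached adversarial batches in between."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    adv_cache = []
    for epoch in range(epochs):
        if epoch % k == 0:                      # staggered refresh
            adv_cache = [attack(model, x, y) for x, y in loader]
        for (x, y), x_adv in zip(loader, adv_cache):
            loss = F.cross_entropy(model(x), y) \
                 + F.cross_entropy(model(x_adv), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

The exact staggering schedule in the paper may differ; the point the abstract makes is that adversarial views enter the contrastive objective at pre-training, and adversarial code generation runs on a coarser-than-per-step cadence at fine-tuning.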
