ResearchTrend.AI
TeleLoRA: Teleporting Model-Specific Alignment Across LLMs

26 March 2025
Xiao Lin
Manoj Acharya
Anirban Roy
Susmit Jha
Abstract

Mitigating Trojans in Large Language Models (LLMs) is one of many tasks where alignment data is LLM-specific: different LLMs have different Trojan triggers and trigger behaviors to be removed. In this paper, we introduce TeleLoRA (Teleporting Low-Rank Adaptation), a novel framework that synergizes model-specific alignment data across multiple LLMs to enable zero-shot Trojan mitigation on unseen LLMs without alignment data. TeleLoRA learns a unified generator of LoRA adapter weights by leveraging local activation information across multiple LLMs. The generator is designed to be permutation symmetric so that it generalizes across models with different architectures and sizes. We optimize the model design for memory efficiency, making it feasible to train on large-scale LLMs with minimal computational resources. Experiments on LLM Trojan-mitigation benchmarks demonstrate that TeleLoRA effectively reduces attack success rates while preserving the benign performance of the models.
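The abstract's key architectural idea is a generator that maps local activation statistics to LoRA adapter weights while remaining permutation symmetric. The paper does not spell out the generator's implementation here, so the following is only a minimal illustrative sketch, assuming a shared per-neuron MLP (`shared_mlp`, hypothetical) applied independently to each neuron's activation features; because the same MLP is applied to every neuron, permuting the neurons permutes the generated adapter rows identically, which is the equivariance property the abstract relies on for cross-architecture generalization.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp(x, w1, w2):
    # The same tiny MLP is applied row-wise to every neuron's feature
    # vector, so permuting neurons permutes outputs the same way
    # (permutation equivariance).
    return np.maximum(x @ w1, 0.0) @ w2

def generate_lora(act_in, act_out, params):
    # act_in:  (d_in, f)  per-input-neuron activation statistics
    # act_out: (d_out, f) per-output-neuron activation statistics
    w1a, w2a, w1b, w2b = params
    a = shared_mlp(act_in, w1a, w2a)    # (d_in, rank)
    b = shared_mlp(act_out, w1b, w2b)   # (d_out, rank)
    return a, b                         # low-rank update: delta_w = b @ a.T

# Hypothetical sizes: layer width, feature dim, LoRA rank, hidden dim.
d_in, d_out, f, r, h = 8, 6, 4, 2, 16
params = (rng.normal(size=(f, h)), rng.normal(size=(h, r)),
          rng.normal(size=(f, h)), rng.normal(size=(h, r)))

act_in = rng.normal(size=(d_in, f))
act_out = rng.normal(size=(d_out, f))
a, b = generate_lora(act_in, act_out, params)
delta_w = b @ a.T  # (d_out, d_in), added to the frozen layer weight

# Equivariance check: permuting input neurons permutes the rows of A.
perm = rng.permutation(d_in)
a_perm, _ = generate_lora(act_in[perm], act_out, params)
assert np.allclose(a_perm, a[perm])
```

In this sketch the generator's parameter count is independent of layer width, which is one plausible reason such a design can transfer across models of different sizes; the paper's actual generator and its memory optimizations may differ.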

View on arXiv
@article{lin2025_2503.20228,
  title={TeleLoRA: Teleporting Model-Specific Alignment Across LLMs},
  author={Xiao Lin and Manoj Acharya and Anirban Roy and Susmit Jha},
  journal={arXiv preprint arXiv:2503.20228},
  year={2025}
}