Origin Tracer: A Method for Detecting LoRA Fine-Tuning Origins in LLMs

26 May 2025
Hongyu Liang
Yuting Zheng
Yihan Li
Yiran Zhang
Shiyu Liang
Abstract

As large language models (LLMs) continue to advance, their deployment often involves fine-tuning to improve performance on specific downstream tasks. However, this customization is sometimes accompanied by misleading claims about a model's origins, raising significant concerns about transparency and trust within the open-source community. Existing model verification techniques typically assess functional, representational, and weight similarities, but these approaches often struggle against obfuscation techniques such as permutations and scaling transformations. To address this limitation, we propose Origin-Tracer, a detection method that rigorously determines whether a model has been fine-tuned from a specified base model and can additionally extract the LoRA rank used during fine-tuning, yielding a more robust verification framework. To our knowledge, this is the first formalized approach specifically aimed at pinpointing the sources of model fine-tuning. We empirically validate our method on thirty-one diverse open-source models under conditions that simulate real-world obfuscation, analyze its effectiveness, and discuss its limitations. The results demonstrate the effectiveness of our approach and indicate its potential to establish new benchmarks for model verification.
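The abstract does not describe the Origin-Tracer algorithm itself, but the intuition behind LoRA-rank extraction can be illustrated with a naive baseline. A LoRA fine-tune replaces a base weight matrix W with W + BA, where B and A share an inner dimension r, so the weight delta between the fine-tuned and base model is (approximately) rank r when no obfuscation has been applied. The sketch below is illustrative only and is not the paper's method; the function names are hypothetical, and it simply counts the significant singular values of the delta to estimate r. Permutation or scaling obfuscation would defeat exactly this kind of naive weight comparison, which is the weakness the paper targets.

import numpy as np

def simulate_lora_update(base_weight: np.ndarray, rank: int, scale: float = 0.01,
                         seed: int = 0) -> np.ndarray:
    """Return base_weight + B @ A, a rank-`rank` LoRA-style update (illustrative only)."""
    rng = np.random.default_rng(seed)
    d_out, d_in = base_weight.shape
    B = rng.normal(size=(d_out, rank)) * scale   # LoRA "B" factor
    A = rng.normal(size=(rank, d_in)) * scale    # LoRA "A" factor
    return base_weight + B @ A

def estimate_delta_rank(suspect_weight: np.ndarray, base_weight: np.ndarray,
                        rel_tol: float = 1e-6) -> int:
    """Estimate the numerical rank of the weight delta from its singular values.

    If the suspect model was LoRA fine-tuned from the base model and has NOT
    been obfuscated (no permutations or rescaling), the delta is low-rank and
    the count of significant singular values approximates the LoRA rank.
    """
    delta = suspect_weight - base_weight
    singular_values = np.linalg.svd(delta, compute_uv=False)  # sorted descending
    threshold = rel_tol * singular_values[0] if singular_values[0] > 0 else 0.0
    return int(np.sum(singular_values > threshold))

if __name__ == "__main__":
    base = np.random.default_rng(1).normal(size=(256, 256))
    suspect = simulate_lora_update(base, rank=8)
    print("estimated LoRA rank:", estimate_delta_rank(suspect, base))  # expected: 8

Because a permuted or rescaled copy of the fine-tuned weights makes the raw delta full-rank, a check of this kind fails under the obfuscation scenarios the paper simulates, motivating the more robust verification framework it proposes.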

@article{liang2025_2505.19466,
  title={Origin Tracer: A Method for Detecting LoRA Fine-Tuning Origins in LLMs},
  author={Hongyu Liang and Yuting Zheng and Yihan Li and Yiran Zhang and Shiyu Liang},
  journal={arXiv preprint arXiv:2505.19466},
  year={2025}
}