
DOGe: Defensive Output Generation for LLM Protection Against Knowledge Distillation

26 May 2025 · arXiv: 2505.19504
Pingzhi Li, Zhen Tan, Huaizhi Qu, Huan Liu, Tianlong Chen
AAML
ArXiv (abs) · PDF · HTML · GitHub (3★)

Papers citing "DOGe: Defensive Output Generation for LLM Protection Against Knowledge Distillation"

4 / 4 papers shown

Leave It to the Experts: Detecting Knowledge Distillation via MoE Expert Signatures
19 Oct 2025 · Pingzhi Li, Morris Yu-Chao Huang, Zhen Tan, Qingquan Song, Jie Peng, Kai Zou, Yu Cheng, Kaidi Xu, Tianlong Chen · MoE, AAML

Information-Preserving Reformulation of Reasoning Traces for Antidistillation
13 Oct 2025 · Jiayu Ding, Lei Cui, Li Dong, Nanning Zheng, Furu Wei · LRM

Unified attacks to large language model watermarks: spoofing and scrubbing in unauthorized knowledge distillation
Knowledge-Based Systems (KBS), 2025
24 Apr 2025 · Xin Yi, Shunfan Zheng, Linlin Wang, Xiaoling Wang, Liang He · AAML

Multi-Level Optimal Transport for Universal Cross-Tokenizer Knowledge Distillation on Language Models
AAAI Conference on Artificial Intelligence (AAAI), 2024
19 Dec 2024 · Xiao Cui, Mo Zhu, Yulei Qin, Liang Xie, Wengang Zhou, Haoyang Li