Privacy Auditing of Large Language Models

9 March 2025
Ashwinee Panda
Xinyu Tang
Milad Nasr
Christopher A. Choquette-Choo
Prateek Mittal
    PILM

Papers citing "Privacy Auditing of Large Language Models"

2 / 2 papers shown
Can Differentially Private Fine-tuning LLMs Protect Against Privacy Attacks?
Hao Du, Shang Liu, Yang Cao
AAML
28 Apr 2025
Toward a Human-Centered Evaluation Framework for Trustworthy LLM-Powered GUI Agents
C. L. P. Chen, Zhiping Zhang, Ibrahim Khalilov, Bingcan Guo, Simret Araya Gebreegziabher, Yanfang Ye, Ziang Xiao, Yaxing Yao, Tianshi Li
LLMAG, ELM
24 Apr 2025