Extracting Targeted Training Data from ASR Models, and How to Mitigate It

Interspeech, 2022
18 April 2022
Ehsan Amid, Om Thakkar, A. Narayanan, Rajiv Mathews, Françoise Beaufays

Papers citing "Extracting Targeted Training Data from ASR Models, and How to Mitigate It"

8 citing papers
Beyond Text: Unveiling Privacy Vulnerabilities in Multi-modal Retrieval-Augmented Generation
Jiankun Zhang, Shenglai Zeng, Jie Ren, Tianqi Zheng, Hui Liu, Xianfeng Tang, Hui Liu, Yi Chang
20 May 2025
Differentially Private Parameter-Efficient Fine-tuning for Large ASR Models
Hongbin Liu, Lun Wang, Om Thakkar, Abhradeep Thakurta, Arun Narayanan
02 Oct 2024
Training Large ASR Encoders with Differential Privacy
Spoken Language Technology Workshop (SLT), 2024
Geeticka Chauhan, Steve Chien, Om Thakkar, Abhradeep Thakurta, Arun Narayanan
21 Sep 2024
Noise Masking Attacks and Defenses for Pretrained Speech Models
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024
Matthew Jagielski, Om Thakkar, Lun Wang
02 Apr 2024
Unintended Memorization in Large ASR Models, and How to Mitigate It
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023
Lun Wang, Om Thakkar, Rajiv Mathews
18 Oct 2023
A Review of Speech-centric Trustworthy Machine Learning: Privacy, Safety, and Fairness
APSIPA Transactions on Signal and Information Processing (TASIP), 2022
Tiantian Feng, Rajat Hebbar, Nicholas Mehlman, Xuan Shi, Aditya Kommineni, Shrikanth Narayanan
18 Dec 2022
Measuring Forgetting of Memorized Training Examples
International Conference on Learning Representations (ICLR), 2022
Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, ..., Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang
30 Jun 2022
What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
Neural Information Processing Systems (NeurIPS), 2020
Vitaly Feldman, Chiyuan Zhang
09 Aug 2020