I Am Not What I Write: Privacy Preserving Text Representation Learning
arXiv:1907.03189
6 July 2019
Ghazaleh Beigi, Kai Shu, Ruocheng Guo, Suhang Wang, Huan Liu

Papers citing "I Am Not What I Write: Privacy Preserving Text Representation Learning"

13 / 13 papers shown
Investigating User Perspectives on Differentially Private Text Privatization
Stephen Meisenbacher, Alexandra Klymenko, Alexander Karpp, Florian Matthes
12 Mar 2025

1-Diffractor: Efficient and Utility-Preserving Text Obfuscation Leveraging Word-Level Metric Differential Privacy
Stephen Meisenbacher, Maulik Chevli, Florian Matthes
02 May 2024

A Neighbourhood-Aware Differential Privacy Mechanism for Static Word Embeddings
Danushka Bollegala, Shuichi Otake, T. Machide, Ken-ichi Kawarabayashi
19 Sep 2023

Differentially Private Language Models for Secure Data Sharing
Justus Mattern, Zhijing Jin, Benjamin Weggenmann, Bernhard Schoelkopf, Mrinmaya Sachan
25 Oct 2022

How Much User Context Do We Need? Privacy by Design in Mental Health NLP Application
Ramit Sawhney, A. Neerkaje, Ivan Habernal, Lucie Flek
05 Sep 2022

Unlearning Protected User Attributes in Recommendations with Adversarial Training
Christian Ganhor, D. Penz, Navid Rekabsaz, Oleg Lesota, Markus Schedl
09 Jun 2022

You Are What You Write: Preserving Privacy in the Era of Large Language Models
Richard Plant, V. Giuffrida, Dimitra Gkatzia
20 Apr 2022

How reparametrization trick broke differentially-private text representation learning
Ivan Habernal
24 Feb 2022

CAPE: Context-Aware Private Embeddings for Private Language Learning
Richard Plant, Dimitra Gkatzia, V. Giuffrida
27 Aug 2021

Automatic de-identification of Data Download Packages
L. Boeschoten, Roos Voorvaart, Casper S. Kaandorp, Ruben van den Goorbergh, M. Vos
04 May 2021

Examining the Feasibility of Off-the-Shelf Algorithms for Masking Directly Identifiable Information in Social Media Data
Rachel Dorn, A. Nobles, Masoud Rouhizadeh, Mark Dredze
16 Nov 2020

Social Science Guided Feature Engineering: A Novel Approach to Signed Link Analysis
Ghazaleh Beigi, Jiliang Tang, Huan Liu
04 Jan 2020

Privacy-Aware Recommendation with Private-Attribute Protection using Adversarial Learning
Ghazaleh Beigi, Ahmadreza Mosallanezhad, Ruocheng Guo, Hamidreza Alvari, A. Nou, Huan Liu
22 Nov 2019