Training with the Invisibles: Obfuscating Images to Share Safely for Learning Visual Recognition Models

1 January 2019
Tae-Hoon Kim, Dongmin Kang, K. Pulli, Jonghyun Choi
arXiv: 1901.00098

Papers citing "Training with the Invisibles: Obfuscating Images to Share Safely for Learning Visual Recognition Models"

4 / 4 papers shown

Privacy Safe Representation Learning via Frequency Filtering Encoder
J. Jeong, Minyong Cho, Philipp Benz, Jinwoo Hwang, J. Kim, Seungkwang Lee, Tae-Hoon Kim
04 Aug 2022

Deep Poisoning: Towards Robust Image Data Sharing against Visual Disclosure
Haojie Guo, Brian Dolhansky, Eric Hsin, Phong Dinh, Cristian Canton Ferrer, Song Wang
FedML
14 Dec 2019

Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations
Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, Vicente Ordonez
FaML
20 Nov 2018

Generating Natural Adversarial Examples
Zhengli Zhao, Dheeru Dua, Sameer Singh
GAN, AAML
31 Oct 2017