No Privacy Left Outside: On the (In-)Security of TEE-Shielded DNN Partition for On-Device ML

11 October 2023
Ziqi Zhang, Chen Gong, Yifeng Cai, Yuanyuan Yuan, Bingyan Liu, Ding Li, Yao Guo, Xiangqun Chen
FedML

Papers citing "No Privacy Left Outside: On the (In-)Security of TEE-Shielded DNN Partition for On-Device ML"

10 / 10 papers shown
An Early Experience with Confidential Computing Architecture for On-Device Model Protection
Sina Abdollahi, Mohammad Maheri, S. Siby, Marios Kogias, Hamed Haddadi
11 Apr 2025

DITING: A Static Analyzer for Identifying Bad Partitioning Issues in TEE Applications
Chengyan Ma, Ruidong Han, Ye Liu, Yuqing Niu, Di Lu, Chuang Tian, Jianfeng Ma, Debin Gao, David Lo
24 Feb 2025

Graph in the Vault: Protecting Edge GNN Inference with Trusted Execution Environment
Ruyi Ding, Tianhong Xu, A. A. Ding, Yunsi Fei
FedML
20 Feb 2025

TEESlice: Protecting Sensitive Neural Network Models in Trusted Execution Environments When Attackers have Pre-Trained Models
Ding Li, Ziqi Zhang, Mengyu Yao, Y. Cai, Yao Guo, Xiangqun Chen
FedML
15 Nov 2024

CoreGuard: Safeguarding Foundational Capabilities of LLMs Against Model Stealing in Edge Deployment
Qinfeng Li, Yangfan Xie, Tianyu Du, Zhiqiang Shen, Zhenghan Qin, Hao Peng, Xinkui Zhao, Xianwei Zhu, Jianwei Yin, Xuhong Zhang
16 Oct 2024

Ascend-CC: Confidential Computing on Heterogeneous NPU for Emerging Generative AI Workloads
Aritra Dhar, Clément Thorens, Lara Magdalena Lazier, Lukas Cavigelli
16 Jul 2024

Laminator: Verifiable ML Property Cards using Hardware-assisted Attestations
Vasisht Duddu, Oskari Järvinen, Lachlan J. Gunn, Nirmal Asokan
25 Jun 2024

TransLinkGuard: Safeguarding Transformer Models Against Model Stealing in Edge Deployment
Qinfeng Li, Zhiqiang Shen, Zhenghan Qin, Yangfan Xie, Xuhong Zhang, Tianyu Du, Jianwei Yin
17 Apr 2024

Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models
Shagufta Mehnaz, S. V. Dibbo, Ehsanul Kabir, Ninghui Li, E. Bertino
MIACV
23 Jan 2022

Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware
Florian Tramèr, Dan Boneh
FedML
08 Jun 2018