Flash: A Hybrid Private Inference Protocol for Deep CNNs with High Accuracy and Low Latency on CPU (arXiv:2401.16732)

20 January 2025
H. Roh, Jinsu Yeo, Yeongil Ko, Gu-Yeon Wei, David Brooks, Woo-Seok Choi

Papers citing "Flash: A Hybrid Private Inference Protocol for Deep CNNs with High Accuracy and Low Latency on CPU" (3 of 3 shown)
Hyena: Optimizing Homomorphically Encrypted Convolution for Private CNN Inference
H. Roh, Woo-Seok Choi
21 Nov 2023
Impala: Low-Latency, Communication-Efficient Private Deep Learning Inference
Woojin Choi, Brandon Reagen, Gu-Yeon Wei, David Brooks (FedML)
13 May 2022
F1: A Fast and Programmable Accelerator for Fully Homomorphic Encryption (Extended Version)
Axel S. Feldmann, Nikola Samardzic, A. Krastev, S. Devadas, R. Dreslinski, Karim M. El Defrawy, Nicholas Genise, Chris Peikert, Daniel Sánchez
11 Sep 2021