ResearchTrend.AI
FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models

12 April 2024
Yanting Wang, Wei Zou, Jinyuan Jia

Papers citing "FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models" (5 papers shown)

1. Does Few-shot Learning Suffer from Backdoor Attacks?
   Xinwei Liu, Xiaojun Jia, Jindong Gu, Yuan Xun, Siyuan Liang, Xiaochun Cao
   31 Dec 2023

2. FLCert: Provably Secure Federated Learning against Poisoning Attacks
   Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong
   Tags: FedML, OOD
   02 Oct 2022

3. Smoothed Embeddings for Certified Few-Shot Learning
   Mikhail Aleksandrovich Pautov, Olesya Kuznetsova, Nurislam Tursynbek, Aleksandr Petiushko, Ivan V. Oseledets
   02 Feb 2022

4. Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
   Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao
   Tags: AAML
   13 Oct 2021

5. SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
   Edward Chou, Florian Tramèr, Giancarlo Pellegrino
   Tags: AAML
   02 Dec 2018