Meta Knowledge Condensation for Federated Learning
29 September 2022
Authors: Ping Liu, Xin Yu, Joey Tianyi Zhou
Communities: DD, FedML

Papers citing "Meta Knowledge Condensation for Federated Learning" (22 papers shown)
Subgraph Federated Learning for Local Generalization (06 Mar 2025)
Authors: Sungwon Kim, Yoonho Lee, Yunhak Oh, Namkyeong Lee, Sukwon Yun, Junseok Lee, Sein Kim, Carl Yang, Chanyoung Park
Communities: FedML, OOD

HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs (25 Dec 2024)
Authors: Junying Chen, Zhenyang Cai, Ke Ji, X. Wang, Wanlong Liu, Rongsheng Wang, Jianye Hou, Benyou Wang
Communities: LRM

Tackling Data Heterogeneity in Federated Time Series Forecasting (24 Nov 2024)
Authors: Wei Yuan, Guanhua Ye, Xiangyu Zhao, Quoc Viet Hung Nguyen, Yang Cao, Hongzhi Yin
Communities: AI4TS

Dataset Distillation-based Hybrid Federated Learning on Non-IID Data (26 Sep 2024)
Authors: Xiufang Shi, Wei Zhang, Mincheng Wu, Guangyi Liu, Z. Wen, Shibo He, Tejal Shah, R. Ranjan
Communities: DD, FedML

Not All Samples Should Be Utilized Equally: Towards Understanding and Improving Dataset Distillation (22 Aug 2024)
Authors: Shaobo Wang, Yantai Yang, Qilong Wang, Kaixin Li, Linfeng Zhang, Junchi Yan
Communities: DD

One-Shot Collaborative Data Distillation (05 Aug 2024)
Authors: William Holland, Chandra Thapa, Sarah Ali Siddiqui, Wei Shao, S. Çamtepe
Communities: DD, FedML

Speech Emotion Recognition under Resource Constraints with Data Distillation (21 Jun 2024)
Authors: Yi Chang, Zhao Ren, Zhonghao Zhao, Thanh Tam Nguyen, Kun Qian, Tanja Schultz, Björn W. Schuller

ATOM: Attention Mixer for Efficient Dataset Distillation (02 May 2024)
Authors: Samir Khaki, A. Sajedi, Kai Wang, Lucy Z. Liu, Y. Lawryshyn, Konstantinos N. Plataniotis

An Aggregation-Free Federated Learning for Tackling Data Heterogeneity (29 Apr 2024)
Authors: Yuan Wang, Huazhu Fu, Renuga Kanagavelu, Qingsong Wei, Yong Liu, Rick Siow Mong Goh
Communities: FedML

DD-RobustBench: An Adversarial Robustness Benchmark for Dataset Distillation (20 Mar 2024)
Authors: Yifan Wu, Jiawei Du, Ping Liu, Yuewei Lin, Wenqing Cheng, Wei-ping Xu
Communities: DD, AAML

Group Distributionally Robust Dataset Distillation with Risk Minimization (07 Feb 2024)
Authors: Saeed Vahidian, Mingyu Wang, Jianyang Gu, Vyacheslav Kungurtsev, Wei Jiang, Yiran Chen
Communities: OOD, DD

Importance-Aware Adaptive Dataset Distillation (29 Jan 2024)
Authors: Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
Communities: DD

Efficient Dataset Distillation via Minimax Diffusion (27 Nov 2023)
Authors: Jianyang Gu, Saeed Vahidian, Vyacheslav Kungurtsev, Haonan Wang, Wei Jiang, Yang You, Yiran Chen
Communities: DD

Sequential Subset Matching for Dataset Distillation (02 Nov 2023)
Authors: Jiawei Du, Qin Shi, Joey Tianyi Zhou
Communities: DD

AST: Effective Dataset Distillation through Alignment with Smooth and High-Quality Expert Trajectories (16 Oct 2023)
Authors: Jiyuan Shen, Wenzhuo Yang, Kwok-Yan Lam
Communities: DD

Can pre-trained models assist in dataset distillation? (05 Oct 2023)
Authors: Yao Lu, Xuguang Chen, Yuchen Zhang, Jianyang Gu, Tianle Zhang, Yifan Zhang, Xiaoniu Yang, Qi Xuan, Kai Wang, Yang You
Communities: DD

A Survey of What to Share in Federated Learning: Perspectives on Model Utility, Privacy Leakage, and Communication Efficiency (20 Jul 2023)
Authors: Jiawei Shao, Zijian Li, Wenqiang Sun, Tailin Zhou, Yuchang Sun, Lumin Liu, Zehong Lin, Yuyi Mao, Jun Zhang
Communities: FedML

Dataset Distillation: A Comprehensive Review (17 Jan 2023)
Authors: Ruonan Yu, Songhua Liu, Xinchao Wang
Communities: DD

A Comprehensive Survey of Dataset Distillation (13 Jan 2023)
Authors: Shiye Lei, Dacheng Tao
Communities: DD

Data Distillation: A Survey (11 Jan 2023)
Authors: Noveen Sachdeva, Julian McAuley
Communities: DD

Dataset Condensation with Differentiable Siamese Augmentation (16 Feb 2021)
Authors: Bo-Lu Zhao, Hakan Bilen
Communities: DD

The Advantage of Conditional Meta-Learning for Biased Regularization and Fine-Tuning (25 Aug 2020)
Authors: Giulia Denevi, Massimiliano Pontil, C. Ciliberto