This repository is the official PyTorch implementation (demo) of FedAgg:

Agglomerative Federated Learning: Empowering Larger Model Training via End-Edge-Cloud Collaboration. IEEE International Conference on Computer Communications (INFOCOM), 2024 (accepted).
Run the demo with:

```bash
python main_fedagg.py
```
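For orientation, below is a minimal, hypothetical sketch of one communication round over the end-edge-cloud topology that FedAgg targets. It uses plain FedAvg-style weight averaging at both tiers with identical model architectures on every device; it is not the FedAgg algorithm itself (which enables training larger models up the hierarchy), and the names `average_state_dicts`, `hierarchical_round`, and `edge_groups` are illustrative, not from this repository.

```python
# Illustrative sketch only, not the repository's code: a generic two-level
# (end -> edge -> cloud) aggregation round with FedAvg-style weight averaging.
import copy
import torch

def average_state_dicts(state_dicts):
    """Element-wise average of a list of model state_dicts (FedAvg-style)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

def hierarchical_round(end_models, edge_groups):
    """One communication round: end devices -> edge servers -> cloud.

    end_models:  list of torch.nn.Module, one per end device (same architecture)
    edge_groups: list of lists of device indices, one list per edge server
    """
    # Edge-level aggregation: each edge server averages its devices' weights.
    edge_states = [
        average_state_dicts([end_models[i].state_dict() for i in group])
        for group in edge_groups
    ]
    # Cloud-level aggregation: the cloud averages the edge models.
    cloud_state = average_state_dicts(edge_states)
    # Broadcast the cloud model back down to every end device.
    for model in end_models:
        model.load_state_dict(cloud_state)
    return cloud_state

# Example: four identical end-device models grouped under two edge servers.
models = [torch.nn.Linear(10, 2) for _ in range(4)]
hierarchical_round(models, edge_groups=[[0, 1], [2, 3]])
```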
If you find this work helpful, please cite:

```bibtex
@inproceedings{wu2024agglomerative,
  title={Agglomerative federated learning: Empowering larger model training via end-edge-cloud collaboration},
  author={Wu, Zhiyuan and Sun, Sheng and Wang, Yuwei and Liu, Min and Gao, Bo and Pan, Quyang and He, Tianliu and Jiang, Xuefeng},
  booktitle={IEEE INFOCOM 2024-IEEE Conference on Computer Communications},
  pages={131--140},
  year={2024},
  organization={IEEE}
}
```
You may also be interested in our related works:

FedICT: Federated Multi-task Distillation for Multi-access Edge Computing. IEEE Transactions on Parallel and Distributed Systems (TPDS). 2024.
FedCache: A Knowledge Cache-driven Federated Learning Architecture for Personalized Edge Intelligence. IEEE Transactions on Mobile Computing (TMC). 2024.
Exploring the Distributed Knowledge Congruence in Proxy-data-free Federated Distillation. ACM Transactions on Intelligent Systems and Technology (TIST). 2024.
FedCache 2.0: Exploiting the Potential of Distilled Data in Knowledge Cache-driven Federated Learning. arXiv preprint arXiv:2405.13378. 2024.
Privacy-Enhanced Training-as-a-Service for On-Device Intelligence: Concept, Architectural Scheme, and Open Problems. arXiv preprint arXiv:2404.10255. 2024.
Federated Class-Incremental Learning with New-Class Augmented Self-Distillation. arXiv preprint arXiv:2401.00622. 2024.
Knowledge Distillation in Federated Edge Learning: A Survey. arXiv preprint arXiv:2301.05849. 2023.
We thank Zeju Li from Beijing University of Posts and Telecommunications, Sijie Cheng from Tsinghua University, Tian Wen, Wen Wang, and Yufeng Chen from the Institute of Computing Technology, Chinese Academy of Sciences, and Jinda Lu from the University of Science and Technology of China for their inspiring suggestions.