
# DapperFL: Domain Adaptive Federated Learning with Model Fusion Pruning for Edge Devices


This repository is the official implementation of [DapperFL: Domain Adaptive Federated Learning with Model Fusion Pruning for Edge Devices] (accepted at NeurIPS 2024).


## Requirements

To install requirements:

```bash
pip install -r requirements.txt
```

We use wandb to log our experiments. If you don't have a wandb account, install it and switch it to offline mode:

```bash
pip install wandb
wandb off
```
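Alternatively, wandb supports offline mode through its standard `WANDB_MODE` environment variable, which applies to the current shell session:

```bash
# Equivalent to `wandb off` for this session: wandb writes run data
# locally instead of syncing it to the wandb server.
export WANDB_MODE=offline
```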

## Training & Evaluation

To train the model(s) in the paper, run this command:

```bash
python ./fedml_experiments/standalone/domain_generalization/main.py \
       --model dapperfl \
       --dataset fl_officecaltech \
       --backbone resnet18
```

### Arguments

You can modify the arguments to run DapperFL in other settings. The arguments are described in the table below, followed by an example invocation.

| Argument | Description |
| --- | --- |
| `prefix` | A prefix for logging. |
| `communication_epoch` | Total number of federated learning communication rounds. |
| `local_epoch` | Number of local epochs for local model updating. |
| `parti_num` | Number of participants. |
| `model` | Name of the FL framework. |
| `dataset` | Dataset used in the experiment. Options: `fl_officecaltech`, `fl_digits`. |
| `pr_strategy` | Pruning ratio used to prune local models. Options: `0` (no pruning), `0.1` ~ `0.9`, `AD` (adaptive pruning). |
| `backbone` | Backbone global model. Options: `resnet10`, `resnet18`. |
| `alpha` | Coefficient `alpha` in co-pruning. Default: `0.9`. |
| `alpha_min` | Coefficient `alpha_min` in co-pruning. Default: `0.1`. |
| `epsilon` | Coefficient `epsilon` in co-pruning. Default: `0.2`. |
| `reg_coeff` | Coefficient for L2 regularization. Default: `0.01`. |
| `seed` | Random seed. |
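For instance, a run on the Digits benchmark with a ResNet-10 backbone and adaptive pruning might look like the following. The argument names come from the table above; the specific values for rounds, local epochs, participants, and seed are illustrative, not settings prescribed by the paper:

```bash
# Example: Digits benchmark, ResNet-10 backbone, adaptive pruning (AD).
# Round/epoch/participant counts and the seed are illustrative values.
python ./fedml_experiments/standalone/domain_generalization/main.py \
       --model dapperfl \
       --dataset fl_digits \
       --backbone resnet10 \
       --pr_strategy AD \
       --communication_epoch 100 \
       --local_epoch 5 \
       --parti_num 10 \
       --seed 42
```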
