# FedConcat

This is the source code for the paper *Exploiting Label Skews in Federated Learning with Model Concatenation*.

An example run script is provided in `run.sh`; a sketch of a typical invocation appears after the parameter table below.

## Parameter Description
| Parameter | Description |
| --- | --- |
| `model` | The model architecture. Options: `simple-cnn`, `vgg`, `resnet`, `mlp`. Default = `mlp`. |
| `dataset` | Dataset to use. Options: `mnist`, `cifar10`, `fmnist`, `svhn`, `generated`, `femnist`, `a9a`, `rcv1`, `covtype`. Default = `mnist`. |
| `alg` | The training algorithm. Options: `fedconcat`. |
| `lr` | Learning rate for the local models. Default = 0.01. |
| `batch-size` | Batch size. Default = 64. |
| `epochs` | Number of local training epochs. Default = 5. |
| `n_parties` | Number of parties. Default = 2. |
| `rho` | The momentum parameter for SGD. Default = 0. |
| `n_clusters` | Number of clusters. |
| `encoder_round` | Number of communication rounds for training the encoders in each cluster. |
| `classifier_round` | Number of communication rounds for training the global classifier. |
| `partition` | The data partition method. Options: `homo`, `noniid-labeldir`, `noniid-#label1` (or 2, 3, ..., i.e., the fixed number of labels each party owns). |
| `beta` | The concentration parameter of the Dirichlet distribution for the heterogeneous partition. Default = 0.5. |
| `device` | The device to run the program on. Default = `cuda:0`. |
| `datadir` | The path to the dataset. Default = `./data/`. |
| `logdir` | The path to store the logs. Default = `./logs/`. |
| `init_seed` | The initial random seed. Default = 0. |
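
For reference, here is a hypothetical invocation assembled from the parameters above. The entry-point name (`main.py`) is an assumption; the authoritative command and flag spellings are in `run.sh`.

```bash
# Hypothetical example: train FedConcat on CIFAR-10 with 40 clients grouped
# into 5 clusters, under a Dirichlet(0.5) label-skew partition.
# NOTE: main.py is an assumed entry point; consult run.sh for the actual
# script name and flags used by this repository.
python main.py \
  --model simple-cnn \
  --dataset cifar10 \
  --alg fedconcat \
  --lr 0.01 \
  --batch-size 64 \
  --epochs 5 \
  --n_parties 40 \
  --n_clusters 5 \
  --encoder_round 50 \
  --classifier_round 50 \
  --partition noniid-labeldir \
  --beta 0.5 \
  --device cuda:0 \
  --datadir ./data/ \
  --logdir ./logs/ \
  --init_seed 0
```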

## Citation

If you find this repository useful, please cite our paper:

```bibtex
@article{diao2023exploiting,
  title={Exploiting Label Skews in Federated Learning with Model Concatenation},
  author={Diao, Yiqun and Li, Qinbin and He, Bingsheng},
  journal={arXiv preprint arXiv:2312.06290},
  year={2023}
}
```