Experimental PyTorch code for the paper "Auto-FedRL: Federated Hyperparameter Optimization for Multi-institutional Medical Image Segmentation" (ECCV 2022)
python=3.6
pytorch=1.4.0
Please refer to conda_environment.yml for additional dependencies.
Federated learning (FL) is a distributed machine learning technique that enables collaborative model training while avoiding explicit data sharing. The inherent privacy-preserving property of FL algorithms makes them especially attractive to the medical field. However, in the case of heterogeneous client data distributions, standard FL methods are unstable and require intensive hyperparameter tuning to achieve optimal performance. Conventional hyperparameter optimization algorithms are impractical in real-world FL applications as they involve numerous training trials, which are often not affordable with limited compute budgets. In this work, we propose an efficient reinforcement learning (RL)-based federated hyperparameter optimization algorithm, termed Auto-FedRL, in which an online RL agent can dynamically adjust the hyperparameters of each client based on the current training progress.
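To give a flavor of how such an online search could work, the sketch below maintains a Gaussian search distribution over continuous hyperparameters (e.g., client learning rate and local iterations) and updates it with a REINFORCE-style rule, using the improvement of the global validation loss as the reward. This is a simplified illustration under our own assumptions, not the implementation in this repository; the class, variable, and default names are placeholders.

import torch

# Hypothetical sketch of an online RL hyperparameter search agent.
# A Gaussian search distribution over continuous hyperparameters is updated
# each FL round with a REINFORCE-style gradient, using the decrease of the
# global validation loss as the reward. Names and defaults are illustrative.
class GaussianHPSearchAgent:
    def __init__(self, init_mean, init_std, agent_lr=0.01):
        self.mean = torch.tensor(init_mean, requires_grad=True)
        self.log_std = torch.log(torch.tensor(init_std)).requires_grad_(True)
        self.opt = torch.optim.Adam([self.mean, self.log_std], lr=agent_lr)
        self.prev_val_loss = None
        self.last_log_prob = None

    def sample_hyperparams(self):
        # Sample one hyperparameter configuration for the upcoming round.
        dist = torch.distributions.Normal(self.mean, self.log_std.exp())
        sample = dist.sample()
        self.last_log_prob = dist.log_prob(sample).sum()
        return sample.tolist()

    def update(self, val_loss):
        # Reward: how much the global validation loss improved this round.
        if self.prev_val_loss is not None and self.last_log_prob is not None:
            reward = self.prev_val_loss - val_loss
            loss = -reward * self.last_log_prob  # REINFORCE-style objective
            self.opt.zero_grad()
            loss.backward()
            self.opt.step()
        self.prev_val_loss = val_loss

# Usage inside a (pseudo) federated training loop:
#   agent = GaussianHPSearchAgent(init_mean=[1e-3, 20.0], init_std=[5e-4, 5.0])
#   for rnd in range(num_rounds):
#       client_lr, local_iters = agent.sample_hyperparams()
#       # ... run one federated round with these hyperparameters ...
#       agent.update(global_val_loss)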
git clone git@github.com:guopengf/Auto-FedRL.git
cd Auto-FedRL
conda env create -f conda_environment.yml
conda activate flpt14
Example training commands for continuous search (CS) and CS MLP are provided in:
bash run_exp.sh
The command-line flags related to hyperparameter search (example invocations are given after this list):
--Search 'enable hyperparameter search; the default is discrete search'
--continuous_search 'enable continuous search'
--continuous_search_drl 'enable continuous search using an MLP'
--rl_nettype 'select the network type from {mlp} for the deep RL agent'
--search_lr 'enable client learning rate search'
--search_ne 'enable client iteration search'
--search_aw 'enable aggregation weights search'
--search_slr 'enable server learning rate search'
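For reference, the flags above might be combined as in the commands below. The entry-point script name (main_fl.py), the flag combinations, and any omitted data/output arguments are assumptions; consult run_exp.sh for the exact commands used in the experiments.

# Hypothetical invocations; see run_exp.sh for the actual commands.
python main_fl.py --Search --continuous_search --search_lr --search_ne --search_aw --search_slr
python main_fl.py --Search --continuous_search_drl --rl_nettype mlp --search_lr --search_ne --search_aw --search_slr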
This codebase uses certain code blocks and helper functions from FedMA, Auto-FedAvg, and Mostafa et al.