
Selection issues #1

Open
gotobcn8 opened this issue Aug 11, 2024 · 0 comments

Comments

@gotobcn8

            if self.server_solver.do_selection:
                for s in range(self.num_clusters):
                    selection.append(np.random.choice(a=range(self.num_clients), size=self.server_solver.selection_size,
                                                      p=self.importance_weights_matrix[:, s], replace=False).tolist())
                logger.log_client_selection(self.exp_id, t, self._idx_to_id(selection))
            else:
                selection = np.tile(range(self.num_clients), reps=(self.num_clusters, 1))

            # Local updates
            for k in np.unique(np.concatenate(selection).ravel()):
                self.client_vec[k].run()

In this code, when server_solver.do_selection is enabled, the same client can be selected for more than one cluster, right? You then use unique() to deduplicate before running the local updates, so the number of distinct selected clients can be greater than or equal to server_solver.selection_size.
But my real question is about cluster aggregation: why do you simply repeat the same aggregation operation for each cluster?
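To make the cross-cluster duplication concrete, here is a small standalone sketch (client counts, cluster counts, and weights are made up) that mirrors the selection loop:

```python
import numpy as np

rng = np.random.default_rng(0)
num_clients, num_clusters, selection_size = 5, 3, 2

# Column-stochastic importance weights: each cluster's column sums to 1.
w = rng.random((num_clients, num_clusters))
w /= w.sum(axis=0, keepdims=True)

# replace=False only prevents duplicates *within* one cluster's draw;
# the same client index can still appear in several clusters' lists.
selection = [
    rng.choice(num_clients, size=selection_size, replace=False, p=w[:, s]).tolist()
    for s in range(num_clusters)
]
flat = np.concatenate(selection)
print(selection)
print(len(flat), len(np.unique(flat)))  # unique count can be below len(flat)
```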
Please see my annotations in the code below.

def _aggregate_fedsoft(self, selection):
    for s in range(self.num_clusters):
        next_weights = self.generate_zero_weights()
        for k in selection[s]:
            if self.server_solver.do_selection:
                # Why is the weight 1 / server_solver.selection_size and not 1 / len(selection)?
                aggregation_weight = 1. / self.server_solver.selection_size
            else:
                aggregation_weight = self.importance_weights_matrix[k][s]
            client_weights = self.client_vec[k].get_model_dict()
            for key in next_weights.keys():
                # Why is this aggregation simply repeated once per cluster?
                next_weights[key] += aggregation_weight * client_weights[key].cpu()
        self.cluster_vec[s].load_state_dict(state_dict=next_weights)
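For context, here is a minimal standalone sketch of what I understand one cluster's aggregation to do, with plain numpy arrays standing in for the PyTorch state_dicts (all names and values here are hypothetical):

```python
import numpy as np

selection_size = 2
# Stand-ins for two selected clients' model state_dicts.
client_weights = {
    0: {"w": np.array([1.0, 2.0])},
    1: {"w": np.array([3.0, 4.0])},
}
selected = [0, 1]  # one cluster's selection list

next_weights = {"w": np.zeros(2)}
for k in selected:
    # Uniform weight 1/selection_size, so the coefficients sum to 1
    # over the clients selected for this cluster.
    for key in next_weights:
        next_weights[key] += (1.0 / selection_size) * client_weights[k][key]

print(next_weights["w"])  # -> [2. 3.], the plain average of the two clients
```

So per cluster this is just a uniform average over that cluster's selected clients, which is why the choice of 1 / selection_size as the coefficient confuses me relative to the importance weights used in the no-selection branch.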

After running the code, the results confused me because they did not match my expectations.
Thanks
