Releases: LabeliaLabs/distributed-learning-contributivity

v0.4.2

23 Nov 16:51
170ed8a

What's Changed

Full Changelog: v0.4.1...v0.4.2

v0.4.1

17 Nov 10:48
22b29b2

What's Changed

Full Changelog: v0.4.0...v0.4.1

v0.4.0

03 Nov 09:54
6e84306

Changes

Main changes vs. v0.3.1:

  • #355 & #356: Upgrade to TF 2.5.1
  • #354: Add DRFA/AFL learning approach
  • #353: Plot data distribution
  • #352: Add single partner learning to available methods
  • #350: Add Fashion-MNIST dataset
  • #345: Federated averaging with local and global optimizations
  • #342: Conduct first benchmarks on reference scenarios
  • #338: Configure nightly builds
  • #337: Change results-saving behaviour: results are now saved by default
  • #335: Clarify documentation
  • #326: Update to Pytest 6.2.2
  • #322: Improve log format
  • #321: Define and describe reference scenarios for experiments and benchmarks
  • #320: Create a fully flexible Splitter enabling more advanced and realistic Scenarios
  • #318: Fix bug with aggregation_weighting schemes
  • #315: Faster versions of FedAvg, FedGrads and Smodel

Contributors

Many thanks to the following contributors and participants:

v0.3.1

08 Jan 12:36
2e28840
  • Roll back to Python 3.6 as the minimal requirement, to ensure compatibility with Colab notebooks
  • Add a build with a Python 3.6 environment in GitHub Actions

v0.3.0

07 Jan 10:59
d0d8d23

MPLC v0.3.0 Release notes

This release introduces several changes to the library, which now offers more modularity. It is also deployed on PyPI.

Features

  • Multiple code refactorings: Refactor the multi-partner learning approaches in a more object-oriented way. #255
  • New Experiment object: An object which runs and repeats several scenarios and gathers their results to simplify analysis. #275
  • New corruption methods and options: Refactor the corruption in a more object-oriented way. Add random, permutation, duplication and redundancy ways to corrupt a dataset. #280 & #277
  • Updated datasets integration: Change the module architecture. Each dataset is now a subclass of the Dataset class. #262
  • Updated tutorial notebooks. #266
  • Documentation: Update and format the documentation. #264 & #294
  • Add code coverage badge: A new badge on the README, to track the evolution of test coverage. #268
  • More tests added: There are now end-to-end tests for contributivity methods and multi-partner learning approaches. #304 & #300
  • Add FederatedGradient: New multi-partner learning method. The gradients are aggregated (instead of the weights), then the optimizer updates the model with the aggregated gradient (see the first sketch after this list). #299
  • Add S-model: New multi-partner learning method. It adds a NoiseAdaptative layer for each partner, which is intended to adapt to potential label flips and thus improve the shared model's performance, even with corrupted partners. #281 & #301
  • Add local/global option for test and validation datasets. The dataset is split (only once, and in a stratified way) into train/test/validation sets. If ‘local’, the test and validation sets are split between partners in exactly the same way as the training set (see the second sketch after this list). #288
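
For illustration, here is a minimal sketch of the gradient-aggregation idea behind FederatedGradient, assuming a plain TensorFlow/Keras setup. This is not the library's implementation; names such as `federated_gradient_round` and `partner_batches` are hypothetical.

```python
# Sketch only: aggregate per-partner gradients, then take one shared optimizer step.
import tensorflow as tf

def federated_gradient_round(global_model, loss_fn, optimizer, partner_batches):
    """One collaborative round: each partner computes gradients on its own batch,
    the gradients are averaged, and the shared optimizer applies the average."""
    per_partner_grads = []
    for x, y in partner_batches:  # one (features, labels) batch per partner
        with tf.GradientTape() as tape:
            predictions = global_model(x, training=True)
            loss = loss_fn(y, predictions)
        per_partner_grads.append(tape.gradient(loss, global_model.trainable_variables))

    # Average the gradients variable-by-variable across partners
    aggregated = [
        tf.reduce_mean(tf.stack(grads_for_var, axis=0), axis=0)
        for grads_for_var in zip(*per_partner_grads)
    ]

    # A single optimizer step with the aggregated gradient updates the shared model
    optimizer.apply_gradients(zip(aggregated, global_model.trainable_variables))
```

Unlike federated averaging, where each partner trains locally and the resulting weights are averaged, here only the per-partner gradients are averaged before a single update of the shared model.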
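
The split described in the last item could be obtained with scikit-learn's `train_test_split` and stratification, as in the following minimal sketch. It is an assumption about the general technique, not the library's code; the function name and the 10%/10% default ratios are hypothetical.

```python
# Sketch only: split a labelled dataset once into stratified train/val/test subsets.
from sklearn.model_selection import train_test_split

def stratified_train_val_test_split(x, y, val_size=0.1, test_size=0.1, seed=42):
    """Split once, keeping class proportions identical across the three subsets."""
    x_train, x_hold, y_train, y_hold = train_test_split(
        x, y, test_size=val_size + test_size, stratify=y, random_state=seed
    )
    # Split the held-out part into validation and test, still stratified
    relative_test = test_size / (val_size + test_size)
    x_val, x_test, y_val, y_test = train_test_split(
        x_hold, y_hold, test_size=relative_test, stratify=y_hold, random_state=seed
    )
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)
```

With the ‘local’ option, each partner would then receive its own slice of the validation and test sets, partitioned with the same scheme as its training data; with ‘global’, all partners evaluate against the shared validation and test sets.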

Fixes

  • Fix bugs in corruption by permutation: A transposition was missing. #284
  • Update Cifar10 model optimizer: The learning rate was far too low. We also switched from RMSprop to Adam, as it showed better performance. #283
  • Fix early stopping: Condition for early stopping was not reachable. #279
  • Logs: Add epoch number when showing evaluation metrics. #274
  • Dependencies updates: Removal of standalone Keras (we now exclusively use TensorFlow's built-in Keras module); upgrade to TensorFlow v2.4.0, NumPy v1.19.4, scikit-learn v0.23.2 and Pandas v1.1.5. #290
  • Switch to GitHub Actions instead of Travis CI for continuous integration. #298
  • Normalize contributivity scores in step-by-step methods. #295

Contributors

This release received contributions from: