Opacus is a library that enables training PyTorch models with differential privacy. It requires minimal client-side code changes, has little impact on training performance, and allows the client to track, online, the privacy budget expended at any given moment.
This code release is aimed at two target audiences:
- ML practitioners will find this to be a gentle introduction to training a model with differential privacy as it requires minimal code changes.
- Differential Privacy researchers will find this easy to experiment and tinker with, allowing them to focus on what matters.
The latest release of Opacus can be installed via pip:

```bash
pip install opacus
```
or, alternatively, via conda:

```bash
conda install -c conda-forge opacus
```
You can also install directly from source for the latest features (along with its quirks and occasional bugs):

```bash
git clone https://github.com/pytorch/opacus.git
cd opacus
pip install -e .
```
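Whichever method you use, a quick import is an easy way to confirm the installation. A minimal sanity check, assuming the package exposes a `__version__` attribute as recent releases do, might look like:

```python
# Post-install sanity check: import the package and print its version.
import opacus

print(opacus.__version__)  # exact string depends on the installed release
```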
To train your model with differential privacy, all you need to do is to instantiate a `PrivacyEngine` and pass your model, data_loader, and optimizer to the engine's `make_private()` method to obtain their private counterparts.
```python
import torch
from torch.optim import SGD
from opacus import PrivacyEngine

# define your components as usual
model = Net()
optimizer = SGD(model.parameters(), lr=0.05)
data_loader = torch.utils.data.DataLoader(dataset, batch_size=1024)

# enter PrivacyEngine
privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.1,
    max_grad_norm=1.0,
)
# Now it's business as usual
```
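From here the training loop is unchanged: the returned optimizer clips per-sample gradients and adds noise on each `step()`. As a rough sketch (the loss function, epoch count, and `delta` value below are placeholder choices for illustration, not prescribed by Opacus), a loop that also tracks the privacy budget spent so far might look like:

```python
import torch.nn.functional as F

DELTA = 1e-5  # placeholder target delta; choose one appropriate for your dataset size

for epoch in range(10):
    for data, target in data_loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(data), target)  # assumes a classification task
        loss.backward()
        optimizer.step()  # per-sample clipping and noise addition happen here

    # Query the privacy budget expended so far
    epsilon = privacy_engine.get_epsilon(delta=DELTA)
    print(f"epoch {epoch}: ε = {epsilon:.2f} at δ = {DELTA}")
```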
The MNIST example shows an end-to-end run using Opacus; more can be found in the examples folder.
Opacus 1.0 introduced many improvements to the library, but also some breaking changes. If you've been using Opacus 0.x and want to update to the latest release, please use this Migration Guide.
We've built a series of IPython-based tutorials as a gentle introduction to training models with privacy and using various Opacus features.
- Building an Image Classifier with Differential Privacy
- Training a differentially private LSTM model for name classification
- Building a text classifier with Differential Privacy on BERT
- Opacus Guide: Introduction to advanced features
- Opacus Guide: Grad samplers
- Opacus Guide: Module Validator and Fixer
The technical report introducing Opacus, presenting its design principles, mathematical foundations, and benchmarks, can be found here. If you use Opacus in your papers, please consider citing the report as follows:
```bibtex
@article{opacus,
  title={Opacus: {U}ser-Friendly Differential Privacy Library in {PyTorch}},
  author={Ashkan Yousefpour and Igor Shilov and Alexandre Sablayrolles and Davide Testuggine and Karthik Prasad and Mani Malek and John Nguyen and Sayan Ghosh and Akash Bharadwaj and Jessica Zhao and Graham Cormode and Ilya Mironov},
  journal={arXiv preprint arXiv:2109.12298},
  year={2021}
}
```
If you want to learn more about DP-SGD and related topics, check out our series of blog posts and talks:
- Differential Privacy Series Part 1 | DP-SGD Algorithm Explained
- Differential Privacy Series Part 2 | Efficient Per-Sample Gradient Computation in Opacus
- PriCon 2020 Tutorial: Differentially Private Model Training with Opacus
- Differential Privacy on PyTorch | PyTorch Developer Day 2020
- Opacus v1.0 Highlights | PyTorch Developer Day 2021
Check out the FAQ page for answers to some of the most frequently asked questions about differential privacy and Opacus.
See the CONTRIBUTING file for how to help out. Do also check out the README files inside the repo to learn how the code is organized.
This code is released under Apache 2.0, as found in the LICENSE file.