
Commit

chore: Merge branch 'main' into gpu_update_4608
kcelia committed Sep 25, 2024
2 parents 46b831e + 4556ed3 commit 218c666
Showing 7 changed files with 79 additions and 47 deletions.
2 changes: 1 addition & 1 deletion docs/advanced_examples/LogisticRegressionTraining.ipynb
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Logistic Regression Training\n",
"# Logistic Regression Training on Encrypted Dataset\n",
"\n",
"This notebook shows how to train a logistic regression model on encrypted data using stochastic gradient descent (SGD). During this process,\n",
"the training set remains encrypted at all times and the gradients and loss are encrypted, thus unaccessible by the server performing the training. \n",
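As a hedged sketch of the workflow this notebook demonstrates (the `fit_encrypted` and `parameters_range` arguments and the `fhe` keyword are assumptions about the Concrete ML `SGDClassifier` API; check them against your installed version):

```python
# Minimal sketch: logistic regression trained with SGD, keeping data encrypted.
import numpy as np
from concrete.ml.sklearn import SGDClassifier

# Toy, pre-scaled binary classification data
X = np.random.uniform(-1.0, 1.0, size=(100, 4))
y = (X[:, 0] > 0).astype(int)

# fit_encrypted enables FHE training; parameters_range bounds the learned weights
model = SGDClassifier(max_iter=50, fit_encrypted=True, parameters_range=(-1.0, 1.0))

# fhe="simulate" runs the training circuit in the clear for quick checks;
# fhe="execute" trains on actually encrypted data
model.fit(X, y, fhe="simulate")
```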
20 changes: 12 additions & 8 deletions docs/guides/hybrid-models.md
@@ -53,11 +53,12 @@ class FCSmall(nn.Module):
def forward(self, x):
return self.seq(x)

-model = FCSmall(10)
+dim = 10
+model = FCSmall(dim)
model_name = "FCSmall"
submodule_name = "seq.0"

-inputs = torch.Tensor(np.random.uniform(size=(10, 10)))
+inputs = torch.Tensor(np.random.uniform(size=(10, dim)))
# Prints ['', 'seq', 'seq.0', 'seq.1', 'seq.2']
print([k for (k, _) in model.named_modules()])

@@ -68,7 +69,6 @@ hybrid_model.compile_model(
n_bits=8,
)


models_dir = Path(os.path.abspath('')) / "compiled_models"
models_dir.mkdir(exist_ok=True)
model_dir = models_dir / model_name
@@ -110,7 +110,7 @@ You can develop a client application that deploys a model with hybrid deployment
```python
# Modify model to use remote FHE server instead of local weights
hybrid_model = HybridFHEModel(
-    model,
+    model, # PyTorch or Brevitas model
submodule_name,
server_remote_address="http://0.0.0.0:8000",
model_name=f"{model_name}",
@@ -122,7 +122,7 @@ Next, obtain the parameters necessary to encrypt and quantize data, as detailed

<!--pytest-codeblocks:skip-->

-```
+```python
path_to_clients = Path(__file__).parent / "clients"
hybrid_model.init_client(path_to_clients=path_to_clients)
```
@@ -131,7 +131,7 @@ When the client application is ready to make inference requests to the server, s

<!--pytest-codeblocks:skip-->

-```
+```python
for module in hybrid_model.remote_modules.values():
module.fhe_local_mode = HybridFHEMode.REMOTE
```
@@ -141,10 +141,14 @@ For inference with the `HybridFHEModel` instance, `hybrid_model`, call the regul
<!--pytest-codeblocks:skip-->

```python
-hybrid_model.forward(torch.randn((dim, )))
+hybrid_model(torch.randn((dim, )))
```

-When calling `forward`, the `HybridFHEModel` handles all the necessary intermediate steps for each model part deployed remotely, including:
+<!-- Add a forward method to hybridfhemodel?-->
+
+<!-- FIXME: https://github.com/zama-ai/concrete-ml-internal/issues/4579-->
+
+When calling the `HybridFHEModel` instance, it handles all the necessary intermediate steps for each model part deployed remotely, including:

- Quantizing the data.
- Encrypting the data.
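Putting the snippets of this guide together, here is a minimal local sketch (no remote server; the `FCSmall` constructor body is an assumption reconstructed from the `named_modules` output above, and `compile_model` is assumed to take the calibration inputs as its first argument):

```python
import numpy as np
import torch
from torch import nn
from concrete.ml.torch.hybrid_model import HybridFHEModel

class FCSmall(nn.Module):
    """Plausible definition matching ['', 'seq', 'seq.0', 'seq.1', 'seq.2']."""

    def __init__(self, dim):
        super().__init__()
        self.seq = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.seq(x)

dim = 10
model = FCSmall(dim)

# Mark "seq.0" for FHE execution (remote in production, local in this sketch)
hybrid_model = HybridFHEModel(model, "seq.0")

# Calibrate and compile the marked submodule
inputs = torch.Tensor(np.random.uniform(size=(10, dim)))
hybrid_model.compile_model(inputs, n_bits=8)

# Call the wrapper like a regular model
out = hybrid_model(torch.randn((dim,)))
```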
20 changes: 10 additions & 10 deletions docs/guides/using_gpu.md
@@ -1,8 +1,9 @@
# GPU acceleration

-Concrete ML can compile both built-in and custom models using a CUDA-accelerated backend. Once
-a model is compiled for CUDA, executing it on a non-CUDA enabled machine will result in
-an error being raised.
+This document provides complete instructions on using GPU acceleration with Concrete ML.
+
+Concrete ML can compile both built-in and custom models using a CUDA-accelerated backend. However, once
+a model is compiled for CUDA, executing it on a non-CUDA-enabled machine will raise an error.

## Support

@@ -12,20 +13,19 @@ an error being raised.
| | | | | |

{% hint style="warning" %}
-When compiling a model for GPU it will be assigned gpu-specific crypto-system parameters. GPU-compatible
-parameters are more constrained than CPU ones so, for some models, the Concrete compiler might have more difficulty finding GPU-compatible crypto-parmaters, resulting in the `NoParametersFound` error.
+When compiling a model for GPU, it will be assigned GPU-specific crypto-system parameters. These parameters are more constrained than CPU-specific ones.
+As a result, the Concrete compiler may have difficulty finding suitable GPU-compatible crypto-parameters for some models, leading to a `NoParametersFound` error.
{% endhint %}

## Performance

-Performance gains between 1x-10x can be obtained on
-high end GPUs such as V100, A100, H100, when compared to a desktop CPU. Compared to a high-end 64 or 96-core server CPU, speed-ups are around 1x-3x.
+On high-end GPUs such as the V100, A100, or H100, performance gains range from 1x to 10x compared to a desktop CPU. Compared to a high-end 64- or 96-core server CPU, the speed-up is typically around 1x to 3x.

-On consumer grade GPUs such as GTX40xx or GTX30xx there may be
-little speedup or even a slowdown compared to executing
+On consumer-grade GPUs such as the RTX 40xx or RTX 30xx series, there may be
+little speedup or even a slowdown compared to execution
on a desktop CPU.

-## Usage preqreuisites
+## Prerequisites

To use the CUDA-enabled backend, you need to install the GPU-enabled Concrete compiler:

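As a hedged sketch of the overall flow (the `device` argument to `compile` is an assumption about the built-in model API; `fhe="execute"` then requires a CUDA-enabled machine, per the note above):

```python
# Minimal sketch: compile a built-in model for the CUDA backend.
import numpy as np
from concrete.ml.sklearn import LogisticRegression

X = np.random.uniform(size=(100, 10))
y = (X.sum(axis=1) > 5.0).astype(int)

model = LogisticRegression(n_bits=8)
model.fit(X, y)

# Compiling for GPU may raise NoParametersFound (see the hint above)
model.compile(X, device="cuda")

# FHE execution of the compiled circuit
y_pred = model.predict(X[:1], fhe="execute")
```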
53 changes: 32 additions & 21 deletions script/make_utils/fix_omp_issues_for_intel_mac.sh
@@ -10,7 +10,7 @@ MACHINE=$(uname -m)
PYTHON_VERSION=$(python --version | cut -d' ' -f2 | cut -d'.' -f1,2)
DO_REGENERATE=0

if [ "$UNAME" == "Darwin" ] && [ "$MACHINE" != "arm64" ]
if [ "$UNAME" == "Darwin" ]
then

# We need to source the venv here, since it's not done in the CI
@@ -41,31 +41,42 @@ then
exit 255
fi

-    # The error is specific to python version
-    if [ "$PYTHON_VERSION" == "3.8" ] || [ "$PYTHON_VERSION" == "3.9" ]
+    # The error is specific to python version and HW
+    if [ "$MACHINE" == "arm64" ]
    then
+        # Same fix for all python versions
rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/xgboost/.dylibs/libomp.dylib
ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/xgboost/.dylibs/libomp.dylib
rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/torch/.dylibs/libiomp5.dylib
ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/torch/.dylibs/libiomp5.dylib
rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/torch/lib/libiomp5.dylib
ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/torch/lib/libiomp5.dylib
rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/torch/lib/libomp.dylib
ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/torch/lib/libomp.dylib
rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/sklearn/.dylibs/libomp.dylib
ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/sklearn/.dylibs/libomp.dylib

elif [ "$PYTHON_VERSION" == "3.10" ]
then
rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/xgboost/.dylibs/libomp.dylib
ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/xgboost/.dylibs/libomp.dylib
rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/torch/lib/libiomp5.dylib
ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/torch/lib/libiomp5.dylib
rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/sklearn/.dylibs/libomp.dylib
ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/sklearn/.dylibs/libomp.dylib
rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/functorch/.dylibs/libiomp5.dylib
ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/functorch/.dylibs/libiomp5.dylib
else
echo "Please have a look to libraries libiomp5.dylib related to torch and then"
echo "apply appropriate fix"
exit 255
+        # The error is specific to python version
+        if [ "$PYTHON_VERSION" == "3.8" ] || [ "$PYTHON_VERSION" == "3.9" ]
+        then
+            rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/xgboost/.dylibs/libomp.dylib
+            ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/xgboost/.dylibs/libomp.dylib
+            rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/torch/.dylibs/libiomp5.dylib
+            ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/torch/.dylibs/libiomp5.dylib
+            rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/torch/lib/libiomp5.dylib
+            ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/torch/lib/libiomp5.dylib
+            rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/sklearn/.dylibs/libomp.dylib
+            ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/sklearn/.dylibs/libomp.dylib
+        elif [ "$PYTHON_VERSION" == "3.10" ]
+        then
+            rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/xgboost/.dylibs/libomp.dylib
+            ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/xgboost/.dylibs/libomp.dylib
+            rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/torch/lib/libiomp5.dylib
+            ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/torch/lib/libiomp5.dylib
+            rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/sklearn/.dylibs/libomp.dylib
+            ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/sklearn/.dylibs/libomp.dylib
+            rm "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/functorch/.dylibs/libiomp5.dylib
+            ln -s "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/site-packages/concrete/.dylibs/libomp.dylib "${WHICH_VENV}"/lib/"${WHICH_PYTHON}"/./site-packages/functorch/.dylibs/libiomp5.dylib
+        else
+            echo "Please have a look to libraries libiomp5.dylib related to torch and then"
+            echo "apply appropriate fix"
+            exit 255
+        fi
fi
fi
16 changes: 11 additions & 5 deletions src/concrete/ml/torch/hybrid_model.py
@@ -343,21 +343,27 @@ class HybridFHEModel:
This will modify the model in place.
Args:
-        model (nn.Module): The model to modify (in-place modification)
+        model (nn.Module): The model to modify (in-place modification).
         module_names (Union[str, List[str]]): The module name(s) to replace with FHE server.
-        server_remote_address): The remote address of the FHE server
-        model_name (str): Model name identifier
-        verbose (int): If logs should be printed when interacting with FHE server
+        server_remote_address (str): The remote address of the FHE server.
+        model_name (str): Model name identifier.
+        verbose (int): If logs should be printed when interacting with FHE server.
+
+    Raises:
+        TypeError: If the provided model is not an instance of torch.nn.Module.
"""

def __init__(
self,
model: nn.Module,
module_names: Union[str, List[str]],
-        server_remote_address=None,
+        server_remote_address: Optional[str] = None,
model_name: str = "model",
verbose: int = 0,
):
+        if not isinstance(model, torch.nn.Module):
+            raise TypeError("The model must be a PyTorch or Brevitas model.")
+
self.model = model
self.module_names = [module_names] if isinstance(module_names, str) else module_names
self.server_remote_address = server_remote_address
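A short sketch of the new guard in action (hedged; the error message is taken from the diff above, and the test below confirms the check fires before any module lookup):

```python
from concrete.ml.torch.hybrid_model import HybridFHEModel

# Anything that is not a torch.nn.Module is now rejected at construction time
try:
    HybridFHEModel("not_a_model", module_names="seq.0")
except TypeError as exc:
    print(exc)  # -> The model must be a PyTorch or Brevitas model.
```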
13 changes: 12 additions & 1 deletion tests/torch/test_hybrid_converter.py
@@ -128,9 +128,9 @@ def run_hybrid_llm_test(
# Get the temp directory path
hybrid_model.save_and_clear_private_info(temp_dir_path)
hybrid_model.set_fhe_mode("remote")

    # At this point, the hybrid model does not have
    # the parameters necessary to run the module_names

module_names = module_names if isinstance(module_names, list) else [module_names]

# Check that files are there
@@ -229,3 +229,14 @@ def test_gpt2_hybrid_mlp_module_not_found():
fake_module_name = "does_not_exist"
with pytest.raises(ValueError, match=f"No module found for name {fake_module_name}"):
HybridFHEModel(model, module_names=fake_module_name)


+def test_invalid_model():
+    """Test that a TypeError is raised when the model is not a torch.nn.Module."""
+
+    # Create an invalid model (not a torch.nn.Module)
+    invalid_model = "This_is_not_a_model"
+
+    # Attempt to create a HybridFHEModel with an invalid model type and expect a TypeError
+    with pytest.raises(TypeError, match="The model must be a PyTorch or Brevitas model."):
+        HybridFHEModel(invalid_model, module_names="sub_module")
2 changes: 1 addition & 1 deletion use_case_examples/deployment/server/README.md
@@ -10,8 +10,8 @@ We show-case how to do this on 3 examples:
You can run these example locally using Docker, or on AWS if you have your credentials set up.

For all of them the workflow is the same:
-0\. Optional: Train the model

+1. Optional: Train the model
1. Compile the model to an FHE circuit
1. Deploy to AWS, Docker or localhost
1. Run the inference using the client (locally or in Docker)
