EVA for ML tasks #10

Open · PereteanuGeorge opened this issue Apr 15, 2021 · 2 comments

PereteanuGeorge commented Apr 15, 2021

Hello,

I am trying to use EVA for a simple encrypted MNIST classifier.

The code for my ConvNet is the following:

class ConvNet(torch.nn.Module):
    def __init__(self, hidden=64, output=10):
        super(ConvNet, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 4, kernel_size=7, padding=0, stride=3)
        self.fc1 = torch.nn.Linear(256, hidden)
        self.fc2 = torch.nn.Linear(hidden, output)

    def forward(self, x):
        x = self.conv1(x)
        x = x * x                # square activation (polynomial, hence HE-friendly)
        x = x.view(-1, 256)
        x = self.fc1(x)
        x = x * x
        x = self.fc2(x)
        return x

However, running this simple piece of code:

prog = EvaProgram('prog', vec_size=32*32)
with prog:
    image = Input('image')
    result = model(image)
    probs = torch.softmax(torch.tensor(result), 0)
    label_max = torch.argmax(probs)
    print(f'label_max type {type(label_max)}')
    print(f'label_max value {label_max}')
    Output('label_max', label_max.numpy())

throws this error: TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not Expr

If, however, I replace the EVA code with:

prog = EvaProgram('prog', vec_size=32*32)
with prog:
    result = model(image)
    probs = torch.softmax(torch.tensor(result), 0)
    label_max = torch.argmax(probs)
    print(f'label_max type {type(label_max)}')
    print(f'label_max value {label_max}')
    Output('label_max', label_max.numpy())

it fails with: TypeError: No conversion to Term available for 0.

I understand what both errors mean, but I couldn't find a way to solve them. I was wondering whether EVA supports ML tasks, and whether there are any concrete examples other than the image_processing one. Thanks a lot!

PereteanuGeorge (Author) commented:
For simplicity of reproduction, I will post the whole code:

import torch
from torchvision import datasets
import torchvision.transforms as transforms
from random import randint

# EVA imports used further below
from eva import EvaProgram, Input, Output
from eva.ckks import CKKSCompiler
from eva.seal import generate_keys

torch.manual_seed(73)

train_data = datasets.MNIST('data', train=True, download=True, transform=transforms.ToTensor())
test_data = datasets.MNIST('data', train=False, download=True, transform=transforms.ToTensor())

batch_size = 64

train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=1, shuffle=True)

cuda_dev = '0'  # GPU device 0 (can be changed if multiple GPUs are available)

use_cuda = torch.cuda.is_available()
device = torch.device("cuda:" + cuda_dev if use_cuda else "cpu")

print('Device: ' + str(device))
if use_cuda:
    print('GPU: ' + str(torch.cuda.get_device_name(int(cuda_dev))))


class ConvNet(torch.nn.Module):
    def __init__(self, hidden=64, output=10):
        super(ConvNet, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 4, kernel_size=7, padding=0, stride=3)
        self.fc1 = torch.nn.Linear(256, hidden)
        self.fc2 = torch.nn.Linear(hidden, output)

    def forward(self, x):
        x = self.conv1(x)
        x = x * x                # square activation (polynomial, hence HE-friendly)
        x = x.view(-1, 256)
        x = self.fc1(x)
        x = x * x
        x = self.fc2(x)
        return x


def train(model, train_loader, criterion, optimizer, n_epochs=10):
    # model in training mode
    model.train()
    for epoch in range(1, n_epochs + 1):

        train_loss = 0.0
        for data, target in train_loader:
            optimizer.zero_grad()
            data = data.to(device)
            target = target.to(device)
            output = model(data)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()
            train_loss += loss.item()

        # calculate average losses
        train_loss = train_loss / len(train_loader)

        print('Epoch: {} \tTraining Loss: {:.6f}'.format(epoch, train_loss))

    # model in evaluation mode
    model.eval()
    return model


model = ConvNet().to(device)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
model = train(model, train_loader, criterion, optimizer, 10)

def load_input():
    # Pick a random test sample; valid indices run from 0 to len(test_loader) - 1.
    idx = randint(0, len(test_loader) - 1)
    for i, (data, target) in enumerate(test_loader):
        if i == idx:
            return data, target


image, target = load_input()
print(f'target tensor is {target}')

prog = EvaProgram('prog', vec_size=32 * 32)
with prog:
    image = Input('image')
    result = model(image)
    probs = torch.softmax(torch.tensor(result), 0)
    label_max = torch.argmax(probs)
    print(f'label_max type {type(label_max)}')
    print(f'label_max value {label_max}')
    Output('label_max', label_max.numpy())

prog.set_output_ranges(30)
prog.set_input_scales(30)

if __name__ == "__main__":
    inputs = {'image': image}  # the key must match the name given to Input('image')
    compiler = CKKSCompiler()
    compiled, params, signature = compiler.compile(prog)
    public_ctx, secret_ctx = generate_keys(params)
    enc_inputs = public_ctx.encrypt(inputs, signature)
    enc_outputs = public_ctx.execute(compiled, enc_inputs)
    outputs = secret_ctx.decrypt(enc_outputs, signature)
    print(f'expected {target}')
    print(f'got {outputs}')

olsaarik (Contributor) commented:
Hi George!

Unfortunately, performing ML tasks with EVA is not as straightforward as passing Expr instances from an EvaProgram into an existing ML library. The fundamental reason is that homomorphic encryption does not offer all the operations that a PyTorch Tensor does; random access, for example, is not directly supported. EVA's Expr instances represent vectors of vec_size approximate fixed-point values with the following operations (sketched in code after the list):

  • +, - and * for pointwise addition, subtraction and multiplication. Unary negation is also supported. The exponent ** operation expands to a tree of multiplications.
  • << and >> with a constant for vector rotation to left or right.
  • Turning lists of numbers into Exprs representing constant vectors.
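
For concreteness, here is a minimal sketch of these primitives in use (the program name, vec_size, and mask values are arbitrary):

from eva import EvaProgram, Input, Output

# A toy EvaProgram exercising the primitives listed above.
poly = EvaProgram('poly', vec_size=8)
with poly:
    x = Input('x')
    y = 3 * x**2 + 5 * x - 2            # pointwise arithmetic; ** expands to multiplications
    y = y + (x << 1)                    # rotate x one slot to the left, then add
    y = y * [1, 0, 1, 0, 1, 0, 1, 0]    # a plain list of numbers acts as a constant vector
    Output('y', y)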

You can emulate additional operations with these, such as random access to constant indices with << and multiplication by a mask of 1s and 0s (sketched below), but doing so comes at a high cost. Generally, to get good performance, you have to redesign the basic ML operations, such as convolution, specifically for homomorphic encryption. We did some of this work in our previous project CHET (you can find a video here), but unfortunately we don't have an open source version of that.
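
To make this concrete, here is a sketch (the vector size and index are made up; the sum assumes vec_size is a power of two):

from eva import EvaProgram, Input, Output

N, i = 8, 3  # hypothetical vector size and the constant index to "access"

# Emulated random access at a constant index: rotate slot i into slot 0,
# then multiply by a 0/1 mask so every other slot becomes zero.
extract = EvaProgram('extract', vec_size=N)
with extract:
    x = Input('x')
    mask = [1] + [0] * (N - 1)
    Output('x_i', (x << i) * mask)      # x[i] ends up in slot 0 of the result

# A full-vector sum via rotate-and-accumulate (log2(N) rotations): the kind
# of restructured reduction that HE convolutions and dot products build on.
total = EvaProgram('total', vec_size=N)
with total:
    x = Input('x')
    s = x
    shift = 1
    while shift < N:
        s = s + (s << shift)
        shift *= 2
    Output('sum', s)                    # every slot now holds the sum of all slots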

A secondary reason existing ML libraries would not work is that when building an EvaProgram, nothing is immediately executed, but instead EVA traces the user's code and stores a DAG of operations to be executed. Only when you compile and execute the program do the actual operations happen. To make this work, the ML framework's functions for executing the model would have to be hooked up to EVA.
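
You can see the tracing directly: inside the with block, arithmetic on an Input yields another Expr rather than a numeric result (a small illustration):

from eva import EvaProgram, Input, Output

demo = EvaProgram('demo', vec_size=4)
with demo:
    x = Input('x')
    y = x * 2 + 1
    print(type(y))   # an EVA Expr node in the DAG, not a vector of numbers
    Output('y', y)

Nothing has been computed at this point; the recorded operations only run when the program is compiled and executed. This is also why model(image) in the code above hands torch.nn.Conv2d an Expr instead of a Tensor.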

Generally, the approach to adapting an existing ML framework to use EVA is to treat it like a new form of AI accelerator (or decelerator, rather) and implement a new backend. For example, for ONNX Runtime you would implement a new Execution Provider. This might still not be a very clean fit, as homomorphic encryption might not fulfill all the assumptions frameworks make about their backends. For example:

  • Inputs may now be encrypted, so the EVA backend would need its own tensor representation all the way from the public API, while the framework may assume all backends can consume a common format.
  • Encrypted values cannot be displayed, so any printing and debugging functionality may have to be modified.

I do think private AI using homomorphic encryption is a very exciting prospect, and many scenarios (especially around inference) can already provide valuable privacy benefits at a reasonable cost. However, there is still significant work to be done on the tooling side to make these kinds of applications easy to develop.
