MIPVerify-converter

This repository provides a starting point for reading in models for verification with the MIPVerify.jl tool. It is updated for the v0.3.1 release.

Why do I need this?: The MIPVerify.jl package does not currently support loading the structure and weights of a saved model from tools like TensorFlow or PyTorch. Instead, users need to save the weights to a .mat file and specify the structure of the model in Julia. We describe how to do so, with accompanying code examples for two different models.

Quick Start

  1. ? -> .onnx: Export your trained model (e.g. from TensorFlow or PyTorch) to the .onnx format.
  2. .onnx -> .mat:
    • convert.py extracts the model weights and biases, saving them to a .mat file.
      • You will need to install the Python packages specified in REQUIREMENTS.txt. This can either be done system-wide or using a virtual environment.
      • The networks folder provides examples of two models in .onnx format, with the corresponding .mat files produced by convert.py. (Note that the .mat binary files produced differ at the byte level from run to run, but contain the same data.)
# after installing REQUIREMENTS
$ ./convert.py -i networks/mnist_sample_1.onnx -o networks/mnist_sample_1.mat
  3. .mat -> MIPVerify-compatible model specification:
    • Specify the structure of the model using MIPVerify primitives in Julia. See check.jl for instructions, and mnist_sample_1.jl and mnist-net_256x4.jl for examples (a rough sketch of this step follows the list).
    • Import the model weights (again, see check.jl) and validate that the model has the expected accuracy.
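
A minimal sketch of this step for a small fully-connected network, assuming the .mat file stores parameters under keys such as fc1/weight and fc1/bias (the actual key names depend on the layer names in your .onnx file; see check.jl for the repository's own version):

using MIPVerify
using MAT

# Load the parameter dictionary produced by convert.py.
param_dict = matread("networks/mnist_sample_1.mat")
println(keys(param_dict))   # inspect the available weight/bias keys

# Hypothetical key names -- substitute whatever keys the line above prints.
# Depending on how the weights were exported, they may also need to be
# transposed (see "Miscellaneous Errors" below).
fc1 = Linear(param_dict["fc1/weight"], vec(param_dict["fc1/bias"]))
fc2 = Linear(param_dict["fc2/weight"], vec(param_dict["fc2/bias"]))

# Flatten the 28x28x1 MNIST input, then apply the fully-connected layers.
nn = Sequential([Flatten(4), fc1, ReLU(), fc2], "mnist_sample_1")

# Validate the imported model against the MNIST test set; the accuracy
# should match the expected value (see the table below).
mnist = MIPVerify.read_datasets("MNIST")
println(MIPVerify.frac_correct(nn, mnist.test, 10_000))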

⚠️ You should only need to write Julia code once for each architecture.

Common Issues

We list some common issues (in bold), along with possible causes.

Model Accuracy

  • Model has a reasonable test accuracy, but it differs from the expected value.
    • Your model expects input in a range other than [0, 1] (e.g. [0, 255] is common for the MNIST and CIFAR datasets).
    • Your model expects input with normalized mean and standard deviation, but the normalization operation was not serialized in the .onnx file.
    • Your model expects input images that are transposed relative to our dataset.
    • You have swapped the height and width dimensions when processing the images.
  • Model has test accuracy no better than random guessing.
    • If a convolutional layer is present: you have reshaped the convolution weight tensor incorrectly (see the sketch below the list).
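
For the convolution case, a minimal sketch of the fix, assuming the .mat file stores the weight in ONNX's native (out_channels, in_channels, height, width) layout under a hypothetical key name conv1/weight; MIPVerify's Conv2d expects (height, width, in_channels, out_channels):

using MAT
using MIPVerify

param_dict = matread("networks/my_convnet.mat")    # hypothetical file name

# Permute (out_channels, in_channels, height, width)
#      -> (height, width, in_channels, out_channels).
w = permutedims(param_dict["conv1/weight"], (3, 4, 2, 1))
conv1 = Conv2d(w, vec(param_dict["conv1/bias"]), 1)    # stride 1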

Miscellaneous Errors

AssertionError: Number of output channels in matrix, 784, does not match number of output channels in bias, 256

This class of error message often occurs when the weights of a Linear layer are transposed relative to the convention expected by MIPVerify. Transpose the weights when importing them (see mnist-net_256x4.jl for an example of how).
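
A minimal sketch of the fix, again under the hypothetical fc1/weight and fc1/bias key names used above:

using MAT
using MIPVerify

param_dict = matread("networks/mnist_sample_1.mat")

# A weight saved as (out_features, in_features), e.g. 256x784, must be
# transposed to the (in_features, out_features) shape MIPVerify expects.
w = permutedims(param_dict["fc1/weight"])
fc1 = Linear(w, vec(param_dict["fc1/bias"]))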

Sample Networks

The reference test accuracies of the sample MNIST classifier networks found in this repository are provided below.

| Name | Test Accuracy | Source |
| --- | --- | --- |
| mnist_sample_1.onnx | 0.9782 | Network found at resources/mnist/mnist-net.h5 in venus-1.0.1, retrieved 2021-04-08 from the VAS Group website. Converted to .onnx format via a Python conversion script courtesy of Matthias König. |
| mnist-net_256x4.onnx | 0.9764 | VNN-Comp 2020 benchmark network of the same name. |
