- Contacts:
  - Ben Hawks, email: bhawks@fnal.gov, GitHub: @ben-hawks
  - Nhan Tran, email: ntran@fnal.gov, GitHub: @nhanvtran
  - Javier Duarte, email: jduarte@ucsd.edu, GitHub: @jmduarte
  - Giuseppe DiGuglielmo, email: giuseppe.diguglielmo@columbia.edu, GitHub: @GiuseppeDiGuglielmo
- Team members:
  - Nicolò Ghielmetti, CERN
  - Jules Muhizi, Fermilab/Harvard
  - Shvetank Prakash, Columbia/Harvard
  - Rushil Roy, UCSD
- The board is a TUL PYNQ-Z2, based on the Xilinx Zynq SoC (see https://www.tul.com.tw/productspynq-z2.html for more information).
The code is structured as follows:
```
hls4ml
├── code
│   ├── ad
│   │   └── AD03
│   │       ├── inference
│   │       │   ├── hls
│   │       │   ├── sdk
│   │       │   ├── sys
│   │       │   └── utils
│   │       └── training
│   │           ├── convert.py
│   │           ├── model
│   │           │   └── ad03
│   │           │       └── model_ToyCar.h5
│   │           └── train.py
│   └── ic
│       └── RN06
│           ├── inference
│           │   ├── hls
│           │   ├── sdk
│           │   ├── sys
│           │   └── utils
│           └── training
│               ├── convert.py
│               ├── resnet_v1_eembc_RN06
│               │   └── model_best.h5
│               └── train.py
├── results
│   └── pynqz2
│       ├── ad
│       │   ├── accuracy
│       │   └── performance
│       └── ic
│           ├── accuracy
│           └── performance
└── systems
```
- For both the anomaly detection model (AD03) and the image classification model (RN06), there are `training` and `inference` subdirectories.
- Under `training`, there are scripts to train the model with QKeras (`train.py`) and convert it to a Xilinx HLS/Vivado/SDK project using hls4ml (`convert.py`), as sketched below.
- The configuration is controlled by `yml` files.
- For convenience, the pretrained models in `.h5` format are provided in the repository as indicated.
- Under `inference`, the Xilinx HLS, Vivado, and SDK projects will be automatically created after successfully running `convert.py`, in the `hls`, `sys`, and `sdk` folders respectively.
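
To make the flow concrete, here is a minimal sketch of the hls4ml conversion that `convert.py` automates. The model path, output directory, and FPGA part below are assumptions based on the tree above (the Zynq-7020 is the device on the PYNQ-Z2); the real script reads these settings from its `yml` configuration instead.

```python
# Hypothetical sketch of the hls4ml conversion flow wrapped by convert.py.
import hls4ml
from qkeras.utils import load_qmodel

# Load the pretrained QKeras model shipped with the repository
model = load_qmodel("model/ad03/model_ToyCar.h5")

# Generate a per-layer hls4ml configuration from the model
config = hls4ml.utils.config_from_keras_model(model, granularity="name")

# Create the HLS project; the part number is the Zynq-7020 on the PYNQ-Z2
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="../inference/hls",  # matches the tree above
    part="xc7z020clg400-1",
)

# Run C synthesis with Vivado HLS to produce the IP for the Vivado project
hls_model.build(csim=False, synth=True)
```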
- Install Miniconda from here: https://docs.conda.io/en/latest/miniconda.html
- Create the environment:
```
conda env create -f environment.yml
```
- Activate the environment:
```
conda activate tiny-mlperf-env
```
- Install Vivado 2019.1 from https://www.xilinx.com/support/download/index.html/content/xilinx/en/downloadNav/vivado-design-tools/archive.html
- Download the PYNQ-Z2 board files (from https://dpoauwgwqsy2x.cloudfront.net/Download/pynq-z2.zip) and install them by extracting and copying the files to:
```
<path_to_Vivado>/Vivado/2019.1/data/boards/board_files
```
- Set up Vivado 2019.1:
```
source <path_to_Vivado>/Vivado/2019.1/settings64.sh
```
- Ensure the PYNQ-Z2 board is connected (and powered) by USB and visible.
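
Before moving on, a quick sanity check can save time. The snippet below is a hypothetical helper (not part of the repository) that verifies the key packages import from the conda environment and that Vivado is on the PATH:

```python
# check_env.py -- hypothetical helper, not part of the repository
import shutil

import tensorflow as tf
import qkeras  # noqa: F401  (imported only to confirm it is installed)
import hls4ml

print("TensorFlow:", tf.__version__)
print("hls4ml:", hls4ml.__version__)
print("Vivado:", shutil.which("vivado") or "not on PATH -- source settings64.sh first")
```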
In this step, you will download the datasets and perform quantization-aware training with QKeras.
- Change directory:
```
cd code/ad/AD03/training/
```
- Download the dataset for AD03:
```
./get_dataset.sh
```
- Train AD03 (a pretrained model is provided as `model/ad03/model_ToyCar.h5`; a QKeras sketch follows below):
```
python train.py -c AD03.yml
```
N.B. if you don't have a GPU, you can comment out the `import setGPU` line (this also applies to the later Python scripts).
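
For reference, quantization-aware training with QKeras means building the model from quantized layers. The block below is an illustrative sketch only: the layer sizes and 8-bit quantizers are assumptions, and the actual AD03 architecture and bit widths come from `train.py` and `AD03.yml`.

```python
# Illustrative QKeras sketch; not the actual AD03 architecture.
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from qkeras import QDense, QActivation, quantized_bits, quantized_relu

inputs = Input(shape=(640,))  # assumed flattened spectrogram input
x = QDense(128,
           kernel_quantizer=quantized_bits(8, 0, alpha=1),
           bias_quantizer=quantized_bits(8, 0, alpha=1))(inputs)
x = QActivation(quantized_relu(8))(x)
outputs = QDense(640,
                 kernel_quantizer=quantized_bits(8, 0, alpha=1),
                 bias_quantizer=quantized_bits(8, 0, alpha=1))(x)

# AD03 is an autoencoder, so it is trained to reconstruct its input
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
```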
- Change directory:
```
cd code/ic/RN06/training/
```
- Train RN06 (a pretrained model is provided as `resnet_v1_eembc_RN06/model_best.h5`; see the sketch below):
```
python train.py -c RN06_pynqz2.yml
```
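
RN06 is a ResNet-style classifier, so its quantization-aware training uses quantized convolutions rather than dense layers. A minimal sketch of one such block (bit widths assumed; see `train.py` and `RN06_pynqz2.yml` for the real settings):

```python
# Illustrative QKeras convolutional block; not the actual RN06 architecture.
from qkeras import QConv2D, QActivation, quantized_bits, quantized_relu

def quantized_conv_block(x, filters, bits=8):
    """Quantized 3x3 convolution followed by a quantized ReLU."""
    x = QConv2D(filters, (3, 3), padding="same",
                kernel_quantizer=quantized_bits(bits, 0, alpha=1),
                bias_quantizer=quantized_bits(bits, 0, alpha=1))(x)
    return QActivation(quantized_relu(bits))(x)
```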
In this step, you will take the quantization-aware-trained model from the previous step and convert it to firmware using hls4ml. The hls4ml configuration, `pynqz2.yml`, has details such as the implementation architecture.
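
The underlying hls4ml settings that such a configuration controls include the fixed-point precision, the reuse factor (the area/latency trade-off), and the implementation strategy. The snippet below shows those knobs through the hls4ml Python API; the specific values are assumptions, not the ones in `pynqz2.yml`:

```python
# Hypothetical example of the hls4ml knobs a yml configuration maps onto.
import hls4ml
from qkeras.utils import load_qmodel

model = load_qmodel("model/ad03/model_ToyCar.h5")
config = hls4ml.utils.config_from_keras_model(model, granularity="name")

config["Model"]["Precision"] = "ap_fixed<16,6>"  # fixed-point word width (assumed)
config["Model"]["ReuseFactor"] = 64              # higher = smaller but slower (assumed)
config["Model"]["Strategy"] = "Resource"         # helps larger models fit the Zynq-7020
```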
- Change directory:
```
cd code/ad/AD03/training/
```
- Get the test data:
```
python generate_test_data.py -c AD03.yml
```
- Convert AD03:
```
python convert.py -c pynqz2.yml
```
- Change directory:
```
cd code/ic/RN06/training/
```
- Get the test data:
```
source get_test_data.sh
```
- Convert RN06:
```
python convert.py -c RN06_pynqz2.yml
```
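
Before going to hardware, it is worth checking that the fixed-point hls4ml model tracks the QKeras model on the generated test data. A hypothetical check, assuming `model` and `hls_model` are the objects from the conversion sketch above and that the test data is saved as a NumPy array (the file name below is an assumption):

```python
# Hypothetical numerical check; the file name and objects are assumptions.
import numpy as np

X_test = np.load("X_test.npy")  # produced by the get-test-data step (assumed name)

hls_model.compile()             # build the C-simulation library for hls_model.predict
y_qkeras = model.predict(X_test)
y_hls = hls_model.predict(X_test)
print("max abs difference:", np.max(np.abs(y_qkeras - y_hls)))
```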
- Change directory:
```
cd code/ic/<model_name>/inference/sdk/
```
- Open the Xilinx SDK GUI:
```
make gui
```
- Program the FPGA with the bit file in the SDK.
- Run the test harness software in the SDK.
- Download the EEMBC runner GUI and the AD/IC benchmark datasets (see https://github.com/eembc/ulpmark-ml)
- Open the EEMBC runner GUI and perform the measurements, following the instructions in the EEMBC README
The PYNQ-Z2 supports Quad SPI Flash. Please follow these instructions to program and boot from the Flash memory.