Embedded Platform Evaluation
The evaluations are performed using the test-model.py file. This file loads the images from a given set of epoch ids and feeds all images (up to a maximum) to the trained TensorFlow model(s) defined by the 'model_load_file' value(s) in 'params.py'. For each image, the execution times of the different control loop steps (image capture, preprocessing, etc.) are recorded and printed as the images are fed to the model. When all images have been processed (or the maximum number of frames has been reached), summary statistics (average, standard deviation, etc.) are printed to the console.
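To make the measurement flow concrete, here is a minimal sketch of the per-frame timing pattern described above. The step functions, frame count, and array shapes are illustrative placeholders, not the actual code in test-model.py:

import time
import numpy as np

# Placeholder steps; in test-model.py these correspond to the real
# image capture, preprocessing, and TensorFlow inference code.
def capture_frame(i):
    return np.zeros((240, 320, 3), dtype=np.uint8)

def preprocess(img):
    return img[None].astype(np.float32) / 255.0

def infer(x):
    return 0.0  # stand-in for the model's predicted control output

step_times = {"capture": [], "preprocess": [], "inference": []}

for i in range(100):  # one iteration per processed frame
    t0 = time.time()
    img = capture_frame(i)
    t1 = time.time()
    x = preprocess(img)
    t2 = time.time()
    angle = infer(x)
    t3 = time.time()
    step_times["capture"].append(t1 - t0)
    step_times["preprocess"].append(t2 - t1)
    step_times["inference"].append(t3 - t2)

# Statistics printed once all frames have been processed.
for step, ts in step_times.items():
    print("%s: mean=%.3f ms, std=%.3f ms"
          % (step, np.mean(ts) * 1e3, np.std(ts) * 1e3))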
By default, the platforms are tested over epoch 6 (out-video-6.avi), but the epochs processed can be changed by altering epoch_ids in test-model.py:
epoch_ids = [...] #Replace ... with all epochs to be processed
Also, epochs can be processed more than once (e.g., epoch_ids = [6,6] would have the platform process epoch 6 twice).
The maximum number of frames to be processed can likewise be changed by altering:
NFRAMES = _ #Replace _ with the total number of frames to process
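For example, the following settings would have the platform process epoch 6 twice and stop after 1000 frames (the frame count here is just an illustrative value):

epoch_ids = [6, 6]  # process epoch 6 twice
NFRAMES = 1000      # illustrative cap on the total frames processed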
Before running evaluations, the following steps should be taken:
Create the directory where all test results will be stored:
$ mkdir datafiles
Turn off lightdm:
$ sudo service lightdm stop
Run the appropriate script for the platform to maximize performance:
$ ./scripts/maxperf.sh #Raspberry Pi 3 Only
$ ./scripts/jetson-clocks.sh #NVIDIA TX2 Only
For convenience, the platforms can be fully tested by running the following scripts:
Raspberry Pi 3 (and Intel UP Board):
$ ./scripts/model-tests.sh #Should also work on the Intel UP Board
$ ./scripts/memguard-tests.sh
$ ./scripts/palloc-tests.sh
NVIDIA Jetson TX2:
$ ./scripts/tx2-tests.sh
All scripts will create a datafiles/Dataset-temp directory that will contain all of the experiment logs. Rename this directory before running another script, or its contents may be partially or completely overwritten.
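To preserve a run's logs programmatically, the following is a minimal sketch, assuming it is run from the repository root, that renames the directory with a timestamp so the next script cannot overwrite it:

import os
import time

src = "datafiles/Dataset-temp"
if os.path.isdir(src):
    dst = time.strftime("datafiles/Dataset-%Y%m%d-%H%M%S")
    os.rename(src, dst)  # keep this run's logs out of the next run's way
    print("Saved logs to", dst)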
*For the multimodel tests run by the 'model-tests.sh' script, make sure that at least four trained models exist and that all 'model_load_file' values in 'params.py' are different, so that the same model(s) aren't used multiple times (see the illustrative fragment below).
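As an illustration, a params.py fragment of the following shape would satisfy that requirement. The variable layout and model file names here are assumptions for the sketch, not the repository's actual contents:

# params.py (fragment) -- hypothetical layout; adjust names to the real file.
# Four distinct trained models so the multimodel tests never reuse one.
model_load_file = [
    "models/model-1.ckpt",
    "models/model-2.ckpt",
    "models/model-3.ckpt",
    "models/model-4.ckpt",
]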