Jetson

Measure power consumption and runtime for CNN models on a Jetson device.

🔗 Quick Links

  1. Getting Started
  2. Approach
  3. Repository Structure
  4. Documentation

🛸 Getting Started

⚙️ Requirements

Jetson Orin Nano Developer Kit - used to run benchmarking experiments on a Jetson device that collect power and runtime measurements for a CNN model.

The following software configuration was used on the Jetson device for testing:

JetPack 6.1
Jetson Linux 36.4
Docker 27.3.1
OS - Ubuntu 22.04-based root file system
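
To confirm the JetPack and Jetson Linux (L4T) versions on your device, the commands below are commonly used; package and file names can vary between releases, so treat them as a quick check rather than the definitive method.

    sudo apt-cache show nvidia-jetpack | grep Version
    cat /etc/nv_tegra_release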

A DagsHub account and a repository for data versioning.


🏎💨 Run Experiment Script

  1. To maximise the Jetson power and fan speed, run the following commands on the Jetson.

    sudo nvpmodel -m 0
    sudo jetson_clocks
  2. Build the docker image

    sudo docker build -t edge-vision-benchmark -f Dockerfile.jetson .

Important

Use this exact Docker image to ensure compatibility with tensorrt==10.1.0 and torch_tensorrt==2.4.0.
The base image nvcr.io/nvidia/pytorch:24.06-py3-igpu is approximately 5 GB and might take some time to download on the Jetson.
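
Once the build finishes, listing the image is a quick, optional sanity check:

    sudo docker images edge-vision-benchmark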

  3. Run the container to collect power and runtime measurements for the CNN models

    sudo docker run --runtime=nvidia --ipc=host -v $(pwd):/app -d edge-vision-benchmark

    This runs the run_experiment.sh script by default. You can override this by passing your own experiment script as the container command (see the example at the end of this step).

    More information on run_experiment.sh can be found in the data collection on Jetson section.

    To follow the logs of the experiment, you can run the following command

    sudo docker logs -f <container-name>

    You can find the name of the docker container using the sudo docker ps command.
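
    For example, a custom script can be passed as the container command. The script name below is hypothetical, and depending on how Dockerfile.jetson defines its entrypoint you may need to use --entrypoint instead:

    sudo docker run --runtime=nvidia --ipc=host -v $(pwd):/app -d edge-vision-benchmark bash my_experiment.sh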

  4. Add DVC credentials to the Jetson as shown in the video below. On the Jetson, run the commands from the Add a DagsHub DVC remote and Setup credentials sections at the root of the project.

    $ pwd
    /home/username/edge-vision-power-estimation

  5. Upload the benchmark data collected on the Jetson to DagsHub from the root directory of the project.

    First, we create a new branch raw_data_v1. Please make sure to use a new branch for clarity.

    git checkout -b raw_data_v1

    Track the raw_data folder using the dvc add command:

    dvc add jetson/power_logging/raw_data/prebuilt_models

    Next, run the following commands to track the changes in Git. Here we use the commit message Add raw data; please make sure to write a descriptive commit message for clarity.

    git add .dvc jetson/power_logging/raw_data/prebuilt_models.dvc
    git commit -m "Add raw data"

    Push both the data and the new Git branch to their remotes:

    dvc push -r origin
    git push origin raw_data_v1

    After the PR for the raw dataset is merged, create a tag for that specific version of the raw dataset (see the example below). To learn more about tagging, refer to the DVC tagging documentation.
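
    For example, a tag could be created and pushed as follows (the tag name is illustrative):

    git tag -a raw-data-v1.0 -m "Raw dataset v1.0"
    git push origin raw-data-v1.0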

Note

Learn more about the format of the dataset collected in the raw dataset section.

Local development

To do local development (i.e. on your machine rather than the Jetson), you need to set up a development environment.

uv : used as the default tool for running this project locally.

Create a virtual environment using uv and install the dependencies required for the project:

uv venv 
source .venv/bin/activate
uv sync

This setup should allow you to execute measurement scripts for local development purposes, e.g.:

python measure_inference_power.py \
--model "resnet18" \
--model-repo "pytorch/vision:v0.10.0" \
--warmup "1" --runs "3" \
--result-dir "raw_data/prebuilt_models/" \
--optimization-level 3 \
--input-shape 1 3 224 224

💡 Approach

The following process outlines the approach taken to collect the power and runtime values for each layer.

First, we measure the idle power of the Jetson, i.e. how much power is consumed when only the minimal required processes are running on the device.

Caution

We recommend disabling any GUI operations and using the command-line interface on the Jetson to reduce the number of background processes when measuring idle power.
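
On an Ubuntu-based image, one common way to do this is to boot into the non-graphical target (adjust to your setup, and switch back to graphical.target when done):

    sudo systemctl set-default multi-user.target
    sudo reboot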

Next, we run two separate processes on the Jetson. The first process runs the benchmarking for a CNN model and captures the per-layer runtime of the model; it converts the PyTorch model to a TensorRT model using the Torch-TensorRT library.

In the second process, we launch the power logging script. Two separate processes are used to ensure that the benchmarking and power logging tasks are performed concurrently without interference. This approach prevents the benchmarking process from being slowed down by the additional overhead of logging power measurements.
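
The sketch below illustrates this two-process pattern; the commands are placeholders rather than the exact invocations used by run_experiment.sh:

    # Start power logging in the background and remember its PID (placeholder command).
    python power_logger.py --output power.log &
    LOGGER_PID=$!
    # Run the per-layer benchmark in the foreground (placeholder command).
    python benchmark_model.py --model resnet18
    # Stop power logging once benchmarking finishes.
    kill "$LOGGER_PID"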

Finally, we upload the collection of power and runtime data for each model to DagsHub. This is the raw data that we will further preprocess to create training data. This dataset is versioned using DVC.

See the diagram below for a visual explanation:

(Diagram: jetson-power-logging)

For more insights into how power is collected on Jetson, refer to the Power Consumption and Benchmarking on Jetson and the Behind the Scenes documentation.

For insights into how runtime is measured for each layer on Jetson, refer to this document.

📂 Repository Structure

.
├── assets
├── Dockerfile.jetson
├── docs
├── measure_idling_power.py
├── measure_inference_power.py
├── measure_power.py
├── model                       # Benchmarking utility functions
├── pyproject.toml
├── README.md
├── run_experiment.sh
└── uv.lock
  • measure_idling_power.py : This script measures the average power usage when the Jetson is idle, i.e. when no benchmarking is being run.

  • measure_power.py : This script provides a function to read power values from the INA3221 power monitor sensor on the Jetson device (see the sketch after this list).

  • run_experiment.sh : Experiment script that runs the power and runtime collection process end-to-end.
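
As a rough illustration of where these power values come from, the INA3221 rails are exposed through sysfs on the Jetson. The exact hwmon path differs between JetPack releases, so treat the path below as an assumption and check it on your device:

    # Per-rail bus voltage (mV) and current (mA); power in mW is approximately (mV * mA) / 1000.
    cat /sys/bus/i2c/drivers/ina3221/*/hwmon/hwmon*/in*_input
    cat /sys/bus/i2c/drivers/ina3221/*/hwmon/hwmon*/curr*_input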

📚 Documentation

Here are a few links to the relevant documentation for further reading.