Get started with OpenVINO™ Test Drive, an application that allows you to run generative AI and vision models trained by Intel® Geti™ directly on your computer or edge device using OpenVINO™ Runtime.
With OpenVINO™ Test Drive you can:
- Chat with LLMs and evaluate model performance on your computer or edge device
- Experiment with different text prompts to generate images using Stable Diffusion and Stable DiffusionXL models (coming soon)
- Transcribe speech from video using Whisper models, including generation of timestamps (coming soon)
- Run and visualize results of models trained by Intel® Geti™ using single image inference or batch inference mode
Download the latest release from the Releases repository.
Note
To verify the integrity of a downloaded file, you can generate its SHA-256 checksum and compare it to the SHA-256 from the corresponding .sha256 file published in the Releases repository.
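For example, a minimal Python sketch of such a check (the archive and .sha256 file names below are placeholders):

```python
import hashlib
from pathlib import Path

# Placeholder file names; substitute the actual downloaded archive and its .sha256 file.
archive = Path("openvino_testdrive.zip")
checksum_file = Path("openvino_testdrive.zip.sha256")

# Compute the SHA-256 digest of the downloaded archive.
digest = hashlib.sha256(archive.read_bytes()).hexdigest()

# The published .sha256 file is assumed to contain the hex digest as its first token.
expected = checksum_file.read_text().split()[0].lower()

print("OK" if digest == expected else "MISMATCH")
```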
Installation on Windows
- Download the zip archive from the Windows folder of the Releases repository.
- Extract the zip archive, double-click the MSIX installation package, and click the Install button; the installation progress will be displayed.
- Click the application name in the Windows app list to launch OpenVINO™ Test Drive.
Upon starting the application, you can either import an LLM from Hugging Face or upload Intel® Geti™ models from local disk.
- Find a model on Hugging Face and import it.
- Chat with LLMs via the Playground tab (see the sketch after this list).
- Use the Performance metrics tab to get model performance metrics on your computer or edge device.
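Under the hood, chatting uses bindings to OpenVINO™ GenAI. As a rough illustration only (not the application's own code), a comparable generation call in the OpenVINO GenAI Python API might look like this, assuming a model directory already exported to OpenVINO IR format:

```python
import openvino_genai as ov_genai

# Placeholder path to a model already exported to OpenVINO IR format
# (for example, with `optimum-cli export openvino`).
model_dir = "TinyLlama-1.1B-Chat-ov"

# Load the model on a chosen device; "GPU" or "NPU" may be used where available.
pipe = ov_genai.LLMPipeline(model_dir, "CPU")

# Generate a reply to a single prompt.
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```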
- Download deployment code for a model in OpenVINO format trained by Intel® Geti™.
Note
Please check the Intel® Geti™ documentation for more details.
- Import the deployment code into OpenVINO™ Test Drive using the Import model -> Local disk button.
- Run and visualize results of inference on individual images using the Live inference tab (a runtime-level sketch follows this list).
- For batch inference, use the Batch inference tab, provide the path to the folder with input images in Source folder, and specify a Destination folder for the output batch inference results. Click Start to begin batch inference.
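Outside the application, a model exported from Intel® Geti™ in OpenVINO format can also be run directly with the OpenVINO Runtime Python API. The following is a minimal, hypothetical sketch; the IR path, input shape, and preprocessing are placeholders that real deployment code (based on OpenVINO™ Vision ModelAPI) handles for you:

```python
import numpy as np
import openvino as ov

core = ov.Core()

# Placeholder path to the exported IR; "GPU" can be used instead of "CPU" where available.
compiled = core.compile_model("model.xml", "CPU")

# Placeholder input tensor; a real deployment preprocesses an image to match
# the model's expected input shape and layout.
image = np.zeros((1, 3, 224, 224), dtype=np.float32)

result = compiled(image)[compiled.output(0)]
print(result.shape)
```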
Build the Application
The application requires the Flutter SDK and the dependencies for your specific platform to be installed. In addition, the bindings and their dependencies for your platform must be added to the ./bindings folder.
- Install the Flutter SDK. Make sure to follow the guide for the Flutter dependencies.
- Build the bindings and put them into the ./bindings folder. OpenVINO™ Test Drive uses bindings to OpenVINO™ GenAI and OpenVINO™ Vision ModelAPI, located in the ./openvino_bindings folder. See the readme there for more details.
- Once done, you can start the application:
flutter run
Additional Resources
- OpenVINO™ - software toolkit for optimizing and deploying deep learning models.
- GenAI Repository and OpenVINO Tokenizers - resources and tools for developing and optimizing Generative AI applications.
- Intel® Geti™ - software for building computer vision models.
- OpenVINO™ Vision ModelAPI - a set of wrapper classes for particular tasks and model architectures, simplifying data preprocessing and postprocessing as well as routine procedures.
For those who would like to contribute to OpenVINO™ Test Drive, please check out the Contribution Guidelines for more details.
The OpenVINO™ Test Drive repository is licensed under the Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.