# OpenVINO™ AUTO Plugin

The main responsibility of the AUTO plugin is to provide a unified device that enables developers to code deep learning applications once and deploy them anywhere.

Other capabilities of the AUTO plugin include:

  • Static device selection, which intelligently compiles a model to one device or multiple devices.
  • CPU acceleration to start inferencing while the target device is still compiling the model.
  • Model priority support for compiling multiple models to multiple devices.

The component is written in C++. If you want to contribute to the AUTO plugin, follow the common coding style rules.

## Key contacts

For questions, reviews, or merge requests, contact the AUTO Plugin maintainer group.

## Components

The AUTO plugin follows the OpenVINO™ plugin architecture and consists of several main components:

  • `docs` - developer documentation for the AUTO plugin.
  • `src` - sources of the AUTO plugin.
  • `tests` - tests for the AUTO plugin components.

Learn more in the OpenVINO™ Plugin Developer Guide.

## Architecture

The diagram below shows an overview of the components responsible for the basic inference flow:

```mermaid
flowchart TD

    subgraph Application["Application"]
    end

    subgraph Runtime["OpenVINO Runtime"]
        AUTO["AUTO Plugin"] --> CPU["CPU Plugin"]
        AUTO --> GPU["GPU Plugin"]
    end

    Application --> AUTO

    style Application fill:#6c9f7f
```

Find more details in the AUTO Plugin architecture document.
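The first step in the flow above is choosing where the model actually runs. A minimal sketch of priority-based static device selection is shown below; `select_device` and its parameters are illustrative assumptions, not the plugin's real selection logic, which also weighs model precision and device capabilities.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of static device selection: pick the first device
// in the priority list that is actually available on the machine.
std::string select_device(const std::vector<std::string>& priority,
                          const std::vector<std::string>& available) {
    for (const auto& dev : priority) {
        for (const auto& avail : available) {
            if (dev == avail) return dev;
        }
    }
    return "CPU";  // CPU serves as the guaranteed fallback
}
```

For example, with priority `{"GPU", "CPU"}` the sketch returns `"GPU"` when a GPU is available and falls back to `"CPU"` otherwise.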

## Tutorials

## See also