
Fully support mv2/mv3 with fused ops + CMSIS-NN integration on FVP #13701

@psiddh

Description

🚀 The feature, motivation and pitch

Run inference for MobileNetV2 (mv2) and MobileNetV3 (mv3) on the Arm FVP with fused ops and CMSIS-NN integration.

To fully support these models, the following essential ops are needed:

  1. Op Support

Quantized Arithmetic Operations
cortex_m::quantized_add.out — Element-wise addition using the Arm CMSIS-NN function arm_elementwise_add_s8()
cortex_m::quantized_mul.out — Element-wise multiplication using arm_elementwise_mul_s8()
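The requantization involved in a fused quantized add can be sketched as follows. This is a minimal NumPy reference for the semantics only; function and parameter names are illustrative, not the proposed cortex_m operator API. arm_elementwise_add_s8() computes the same result using integer multipliers and shifts instead of float scales.

```python
import numpy as np

def quantized_add_s8(x, y, x_scale, x_zp, y_scale, y_zp, out_scale, out_zp):
    """Reference semantics for element-wise s8 addition:
    dequantize both inputs, add in real arithmetic, requantize."""
    real = (x.astype(np.int32) - x_zp) * x_scale \
         + (y.astype(np.int32) - y_zp) * y_scale
    q = np.round(real / out_scale).astype(np.int32) + out_zp
    return np.clip(q, -128, 127).astype(np.int8)
```

Fusing means the dequantize → add → quantize chain above runs as one kernel on int8 data, avoiding intermediate float tensors on the Cortex-M target.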

Neural Network Layers / MatMul
cortex_m::quantized_conv2d.out — Convolution using arm_convolve_s8()
cortex_m::quantized_linear_per_tensor_out — Fully connected layer using arm_fully_connected_s8()
cortex_m::quantized_avg_pool2d.out — Average pooling
cortex_m::quantized_max_pool2d.out — Max pooling

Convolution variants to cover:
  • Standard convolution (conv2d)
  • Depthwise convolution (depthwise_conv2d)
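The per-tensor quantized fully connected op can be sketched as below. This is an illustrative NumPy reference, not the cortex_m implementation; it assumes symmetric weights (weight zero-point of 0), which is how arm_fully_connected_s8() is typically used, and collapses the three float scales into one effective requantization factor that CMSIS-NN expresses as an integer multiplier plus shift.

```python
import numpy as np

def quantized_linear_per_tensor(x, w, bias, x_zp, eff_scale, out_zp):
    """Reference semantics for a per-tensor quantized linear layer.
    eff_scale = (x_scale * w_scale) / out_scale; accumulation is int32."""
    acc = (x.astype(np.int32) - x_zp) @ w.astype(np.int32).T + bias
    q = np.round(acc * eff_scale).astype(np.int32) + out_zp
    return np.clip(q, -128, 127).astype(np.int8)
```

The same accumulate-in-int32-then-requantize pattern underlies the conv2d and pooling kernels, with pooling omitting the weight term.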

Activation Functions
cortex_m::quantized_relu.out — ReLU activation
cortex_m::quantized_hardtanh.out — Clamped activation function

  2. End-to-end (E2E) integration for running mobile vision models

  3. Full-fledged performance analysis and benchmarking against TFLM

Alternatives

No response

Additional context

No response

RFC (Optional)

No response
