Tengine is a lightweight, high-performance, modular inference engine for embedded devices
Efficient Inference of Transformer models
Free TPU for FPGA, with a compiler supporting PyTorch/Caffe/Darknet/NCNN. An AI processor that uses a Xilinx FPGA to solve image classification, detection, and segmentation problems.
An open-source project for Windows developers to learn how to add AI with local models and APIs to Windows apps.
Sample code for world-class Artificial Intelligence SoCs for computer vision applications.
FREE TPU V3plus for FPGA is the free version of a commercial AI processor (EEP-TPU) for deep learning edge inference
Easy usage of the Rockchip NPUs found in the RK3588 and similar chips (see the minimal usage sketch after this list)
Hardware design of a universal NPU (CNN accelerator) for various convolutional neural networks
YoloV5 NPU for the RK3566/68/88
Simplified AI runtime integration for mobile app development
Advanced driver-assistance system with Google Coral Edge TPU Dev Board / USB Accelerator, Intel Movidius NCS (neural compute stick), Myriad 2/X VPU, Gyrfalcon 2801 Neural Accelerator, NVIDIA Jetson Nano and Khadas VIM3
YoloV8 NPU for the RK3566/68/88
Kotlin bindings for Edgerunner
EmbeddedLLM: API server for embedded device deployment. Currently supports CUDA/OpenVINO/IpexLLM/DirectML/CPU
NPUsim: Full-Model, Cycle-Level, and Value-Aware Simulations of NPU Accelerators
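For orientation, here is a minimal sketch of what running a model on one of these Rockchip NPUs looks like, using the rknn-toolkit-lite2 Python API. The model file name, input shape, and core selection below are placeholder assumptions; the `.rknn` model must first be converted offline with the full RKNN toolkit.

```python
# Minimal sketch: inference on a Rockchip NPU (RK3588 etc.)
# via rknn-toolkit-lite2. "model.rknn" and the 640x640 input
# are placeholders, not taken from any specific repo above.
import numpy as np
from rknnlite.api import RKNNLite

rknn = RKNNLite()
if rknn.load_rknn('model.rknn') != 0:
    raise RuntimeError('failed to load model.rknn')

# On RK3588 a core mask selects which NPU core(s) to use; on
# other chips init_runtime() is typically called without it.
if rknn.init_runtime(core_mask=RKNNLite.NPU_CORE_AUTO) != 0:
    raise RuntimeError('failed to init the NPU runtime')

# Dummy 640x640 RGB frame, the input shape a YOLO-style
# detector on the RK3566/68/88 would commonly expect.
frame = np.zeros((1, 640, 640, 3), dtype=np.uint8)
outputs = rknn.inference(inputs=[frame])
print([o.shape for o in outputs])

rknn.release()
```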