AI Inference SDK Integration Guide
An onboarding guide and model conversion reference for running AI inference on embedded devices.
Supported Frameworks
- TensorFlow Lite → Rockchip NPU (RV1126B)
- ONNX Runtime → DRP-AI (RZ/V2H)
- OpenCV DNN → CPU fallback
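The mapping above implies that the backend is chosen per model format, with OpenCV DNN as the CPU fallback. As an illustrative sketch only: the csun_backend_t enum and the by-extension dispatch below are assumptions for clarity, not part of the documented SDK.

```c
#include <string.h>

/* Hypothetical backend identifiers -- illustrative only, not SDK API. */
typedef enum {
    CSUN_BACKEND_RKNPU,   /* TensorFlow Lite models on the RV1126B NPU    */
    CSUN_BACKEND_DRPAI,   /* ONNX models on the RZ/V2H DRP-AI             */
    CSUN_BACKEND_CPU      /* OpenCV DNN fallback when no accelerator fits */
} csun_backend_t;

/* Pick a backend from the model file extension; anything unrecognized
 * falls back to the CPU path. */
static csun_backend_t pick_backend(const char *model_path)
{
    const char *ext = strrchr(model_path, '.');
    if (ext && strcmp(ext, ".tflite") == 0) return CSUN_BACKEND_RKNPU;
    if (ext && strcmp(ext, ".onnx") == 0)   return CSUN_BACKEND_DRPAI;
    return CSUN_BACKEND_CPU;
}
```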
Model Conversion Flow
1. Export the trained model (TensorFlow / PyTorch)
2. Optimize model size via quantization (INT8/FP16)
3. Compile for the NPU/DRP-AI target
4. Load the model via the SDK API and run inference (see the sketch below)
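A minimal C sketch of step 4, using the two calls documented under API Reference below. The header name csun_ai.h, the model path model.rknn, the zero-initialized tensors, and the 0-on-success return convention are assumptions; the actual csun_tensor_t layout is SDK-specific and not shown.

```c
#include <stdio.h>
#include <string.h>

#include "csun_ai.h"  /* assumed SDK header name */

int main(void)
{
    csun_model_handle_t handle;
    csun_tensor_t input, output;

    /* Load the compiled model; "model.rknn" and 0-on-success are
     * assumptions for illustration. */
    if (csun_ai_load_model("model.rknn", &handle) != 0) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    /* Populate the input tensor with preprocessed data here;
     * csun_tensor_t's fields are SDK-specific. */
    memset(&input, 0, sizeof(input));
    memset(&output, 0, sizeof(output));

    if (csun_ai_infer(&handle, &input, &output) != 0) {
        fprintf(stderr, "inference failed\n");
        return 1;
    }

    /* Post-process 'output' (e.g., decode detection boxes). */
    return 0;
}
```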
API Reference
int csun_ai_load_model(const char *model_path, csun_model_handle_t *handle);
int csun_ai_infer(csun_model_handle_t *handle, csun_tensor_t *input, csun_tensor_t *output);
Supported Models
- Object Detection: YOLOv5/v8, MobileNet SSD
- Classification: ResNet, EfficientNet
- Segmentation: DeepLabV3