Cannot run my CNN model on AMD NPU with ONNX Runtime VitisAI EP
Hello,
I was recently trying to run a CNN model on an NPU. I followed the official procedure and successfully ran the getting_started_resnet sample, but I got an error with my own model: it is reported when loading the model after quantization. My model is a simple CNN without any custom operators.
Can someone help me solve this problem?
Thanks a lot.
- Labels: Ryzen AI
Does anyone know about this issue?
Thanks.
Your issue likely stems from model incompatibility, a quantization error, or operations unsupported by the VitisAI EP. Try these steps:
- Check the ONNX model – run onnx.checker.check_model("your_model.onnx") to verify the graph is valid.
- Ensure proper quantization – use a supported scheme such as the QDQ (QuantizeLinear/DequantizeLinear) format.
- Confirm your ONNX Runtime and Vitis AI versions – run python -c "import onnxruntime; print(onnxruntime.get_device())".
- Enable verbose logging – set SessionOptions.log_severity_level = 0 before creating InferenceSession("your_model.onnx", providers=["VitisAIExecutionProvider"]) so the EP reports which nodes it rejects.
- Test without quantization – run the float model on CPU to isolate quantization issues.
- Check supported operators – ensure every layer in your model is supported by the VitisAI EP.
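The checks above can be sketched in Python. This is a minimal sketch, not the official flow: the helper names (provider_list, diagnose) and the model path are placeholders, and the session step assumes the Ryzen AI VitisAI EP is installed.

```python
def provider_list(prefer_npu=True):
    """Build an EP priority list, keeping CPU last as a fallback
    so session creation still works if the VitisAI EP rejects nodes."""
    providers = ["CPUExecutionProvider"]
    if prefer_npu:
        providers.insert(0, "VitisAIExecutionProvider")
    return providers


def diagnose(model_path):
    """Validate the ONNX graph, then load it with verbose EP logging."""
    # Requires the onnx and onnxruntime packages (with the Ryzen AI
    # VitisAI EP installed for NPU execution).
    import onnx
    import onnxruntime as ort

    # Step 1: verify the graph itself is well-formed before blaming the EP.
    onnx.checker.check_model(model_path)

    # Step 4: verbose logging prints which nodes the VitisAI EP
    # accepts or rejects during session creation.
    opts = ort.SessionOptions()
    opts.log_severity_level = 0  # 0 = VERBOSE

    session = ort.InferenceSession(
        model_path, sess_options=opts, providers=provider_list()
    )
    # The returned list shows which providers were actually assigned.
    return session.get_providers()
```

If "VitisAIExecutionProvider" is missing from the returned list, the EP rejected the model and the verbose log should say why.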
It seems you are using the C++ flow. I suggest trying the Python flow first, which will confirm that your installation and environment are correct. If you want to use the C++ flow, try the C++ version of the Getting Started ResNet tutorial; it shows all the DLL files that are required.
Your error suggests an incorrect setup or missing files, so start with the Getting Started tutorial's C++ flow.
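A minimal Python-flow sketch might look like the following. The config-file name follows the Ryzen AI convention (vaip_config.json ships with the installation), but both paths are placeholders for your setup:

```python
def concrete_shape(shape, fill=1):
    """Replace symbolic/dynamic dims (strings or None) with a fixed size
    so a dummy input tensor can be built from the model's input metadata."""
    return [d if isinstance(d, int) else fill for d in shape]


def run_on_npu(model_path, config_path="vaip_config.json"):
    """Load a quantized model with the VitisAI EP and run one dummy input."""
    # Requires numpy and onnxruntime with the Ryzen AI VitisAI EP installed.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession(
        model_path,
        providers=["VitisAIExecutionProvider"],
        provider_options=[{"config_file": config_path}],
    )
    inp = session.get_inputs()[0]
    # Zero-filled dummy input; replace with real data for accuracy checks.
    x = np.zeros(concrete_shape(inp.shape), dtype=np.float32)
    return session.run(None, {inp.name: x})
```

If this script works on the sample model but fails on yours, the problem is in the model or its quantization rather than the environment.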
Thanks
