I want to convert from NCHW format to NHWC format
When I quantized the RTMDet-Ins-m model downloaded from mmdetection with Vitis AI 3.5, inference was very slow because a transpose layer (computed on the CPU) was generated for every CNN layer.
I think the transpose layers were generated because the XIR model is in NHWC format while the PyTorch model is in NCHW format. Is this inference correct?
Also, I want to convert the model from NCHW to NHWC so that the transpose layers are not generated. How can I do that? Please let me know if there is a valid conversion program.
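For reference, the two layouts differ only in dimension order; the following minimal PyTorch sketch shows the permutation that the inserted transpose layers are effectively computing (illustration only, not a fix):

```python
import torch

# NCHW tensor: batch, channels, height, width (PyTorch default)
x_nchw = torch.randn(1, 3, 224, 224)

# NHWC is just a permutation of the same data; this is essentially what the
# inserted transpose layers have to compute on the CPU at every layer boundary.
x_nhwc = x_nchw.permute(0, 2, 3, 1).contiguous()

print(x_nchw.shape)  # torch.Size([1, 3, 224, 224])
print(x_nhwc.shape)  # torch.Size([1, 224, 224, 3])
```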
This thread is better suited for the Xilinx Support forum: https://support.xilinx.com/s/?language=en_US
Also, since it deals with AI, this AMD forum might be of help: https://community.amd.com/t5/ai/ct-p/amd_ai
Since you are using PyTorch, I would also open a thread or ticket there; this tutorial on memory formats is relevant: https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html
From the PyTorch site, here is where you can open a thread: https://pytorch.org/#community-module
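The linked tutorial covers PyTorch's channels_last memory format. A minimal sketch of how it is applied is below; note that this only changes the physical memory layout to NHWC while tensors still report NCHW shapes, so it may not by itself remove transposes inserted by the Vitis AI compiler. The ResNet model is just a stand-in for RTMDet-Ins-m:

```python
import torch
import torchvision.models as models

# Stand-in model; the actual RTMDet-Ins-m would be built via mmdetection.
model = models.resnet50(weights=None).eval()

# Switch module parameters/buffers to channels_last (NHWC physical layout).
model = model.to(memory_format=torch.channels_last)

# Input keeps the logical NCHW shape, but its strides now follow NHWC.
x = torch.randn(1, 3, 224, 224).to(memory_format=torch.channels_last)

with torch.no_grad():
    out = model(x)
print(out.shape)
```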
Thank you.
I will take it into consideration.
