
Quantization error when converting a yolov8-obb model to ONNX and then to RKNN, looking for advice #261

Open
mozeqiu123 opened this issue Jan 16, 2025 · 3 comments


1. I retrained my own model with yolov8-obb, detecting only a single class. The problem occurs whether the model was trained with bs=1 or bs=16.
2. The conversion to ONNX succeeded.
3. Converting to RKNN fails with the error below.
4. rknn_model_zoo is the latest version from git. I don't understand why this shape error appears; I set bs=1 via `ret = rknn.build(do_quantization=do_quant, dataset=DATASET_PATH, rknn_batch_size=1)`, but it made no difference.

```
rknn_model_zoo/examples/yolov8_obb/python# python convert.py best.onnx rk3588
I rknn-toolkit2 version: 2.3.0
--> Config model
done
--> Loading model
I Loading : 100%|██████████████████████████████████████████████| 156/156 [00:00<00:00, 75294.76it/s]
done
--> Building model
I OpFusing 0: 100%|██████████████████████████████████████████████| 100/100 [00:00<00:00, 310.62it/s]
I OpFusing 1 : 100%|█████████████████████████████████████████████| 100/100 [00:00<00:00, 186.97it/s]
I OpFusing 0 : 100%|█████████████████████████████████████████████| 100/100 [00:00<00:00, 127.04it/s]
I OpFusing 1 : 100%|█████████████████████████████████████████████| 100/100 [00:00<00:00, 116.26it/s]
I OpFusing 0 : 100%|██████████████████████████████████████████████| 100/100 [00:01<00:00, 97.57it/s]
I OpFusing 1 : 100%|██████████████████████████████████████████████| 100/100 [00:01<00:00, 94.66it/s]
I OpFusing 2 : 100%|██████████████████████████████████████████████| 100/100 [00:06<00:00, 14.70it/s]
I GraphPreparing : 100%|████████████████████████████████████████| 210/210 [00:00<00:00, 9456.87it/s]
I Quantizating 1/4: 0%| | 0/210 [00:00<?, ?it/s]E build: The input('/rknn_model_zoo/datasets/COCO/testimg/001.jpg') shape (1, 640, 640, 3) is wrong, expect 'nhwc' like (16, 640, 640, 3)!
I ===================== WARN(0) =====================
E rknn-toolkit2 version: 2.3.0
I Quantizating 1/4: 0%| | 0/210 [00:00<?, ?it/s]
E build: Traceback (most recent call last):
File "rknn/api/rknn_log.py", line 344, in rknn.api.rknn_log.error_catch_decorator.error_catch_wrapper
File "rknn/api/rknn_base.py", line 1971, in rknn.api.rknn_base.RKNNBase.build
File "rknn/api/rknn_base.py", line 175, in rknn.api.rknn_base.RKNNBase._quantize
File "rknn/api/quantizer.py", line 1417, in rknn.api.quantizer.Quantizer.run
File "rknn/api/quantizer.py", line 899, in rknn.api.quantizer.Quantizer._get_layer_range
File "rknn/api/rknn_utils.py", line 328, in rknn.api.rknn_utils.get_input_img
File "rknn/api/rknn_log.py", line 95, in rknn.api.rknn_log.RKNNLog.e
ValueError: The input('rknn_model_zoo/datasets/COCO/testimg/001.jpg') shape (1, 640, 640, 3) is wrong, expect 'nhwc' like (16, 640, 640, 3)!

I ===================== WARN(0) =====================
E rknn-toolkit2 version: 2.3.0
Traceback (most recent call last):
File "rknn/api/rknn_log.py", line 344, in rknn.api.rknn_log.error_catch_decorator.error_catch_wrapper
File "rknn/api/rknn_base.py", line 1971, in rknn.api.rknn_base.RKNNBase.build
File "rknn/api/rknn_base.py", line 175, in rknn.api.rknn_base.RKNNBase._quantize
File "rknn/api/quantizer.py", line 1417, in rknn.api.quantizer.Quantizer.run
File "rknn/api/quantizer.py", line 899, in rknn.api.quantizer.Quantizer._get_layer_range
File "rknn/api/rknn_utils.py", line 328, in rknn.api.rknn_utils.get_input_img
File "rknn/api/rknn_log.py", line 95, in rknn.api.rknn_log.RKNNLog.e
ValueError: The input('rknn_model_zoo/datasets/COCO/testimg/001.jpg') shape (1, 640, 640, 3) is wrong, expect 'nhwc' like (16, 640, 640, 3)!

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "convert.py", line 60, in
ret = rknn.build(do_quantization=do_quant, dataset=DATASET_PATH)
File "/root/miniforge3/envs/RKNN-Toolkit2/lib/python3.8/site-packages/rknn/api/rknn.py", line 192, in build
return self.rknn_base.build(do_quantization=do_quantization, dataset=dataset, expand_batch_size=rknn_batch_size)
File "rknn/api/rknn_log.py", line 349, in rknn.api.rknn_log.error_catch_decorator.error_catch_wrapper
File "rknn/api/rknn_log.py", line 95, in rknn.api.rknn_log.RKNNLog.e
ValueError: Traceback (most recent call last):
File "rknn/api/rknn_log.py", line 344, in rknn.api.rknn_log.error_catch_decorator.error_catch_wrapper
File "rknn/api/rknn_base.py", line 1971, in rknn.api.rknn_base.RKNNBase.build
File "rknn/api/rknn_base.py", line 175, in rknn.api.rknn_base.RKNNBase._quantize
File "rknn/api/quantizer.py", line 1417, in rknn.api.quantizer.Quantizer.run
File "rknn/api/quantizer.py", line 899, in rknn.api.quantizer.Quantizer._get_layer_range
File "rknn/api/rknn_utils.py", line 328, in rknn.api.rknn_utils.get_input_img
File "rknn/api/rknn_log.py", line 95, in rknn.api.rknn_log.RKNNLog.e
ValueError: The input('rknn_model_zoo/datasets/COCO/testimg/001.jpg') shape (1, 640, 640, 3) is wrong, expect 'nhwc' like (16, 640, 640, 3)!
```
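For what it's worth, the batch dimension the quantizer complains about can be read straight out of the ONNX file. A small diagnostic sketch (assuming the `onnx` Python package is installed and `best.onnx` is the file from the log above):

```python
# Diagnostic sketch: print the input shape baked into the exported ONNX file.
import onnx

model = onnx.load("best.onnx")
for inp in model.graph.input:
    dims = [d.dim_value if d.dim_value > 0 else d.dim_param
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)
# If the first dimension prints as 16, the batch size was fixed at export time,
# which is why the quantizer expects (16, 640, 640, 3) calibration inputs.
```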

mozeqiu123 (Author) commented Jan 16, 2025

I exported the ONNX model following the method described here, and the export succeeded:
https://github.com/airockchip/ultralytics_yolov8/blob/main/RKOPT_README.zh-CN.md
The export output is below. I can see that the original input shape is (16, 3, 640, 640); should I change the input shape to (1, 3, 640, 640) before converting?
```
python .\exporter.py
Ultralytics 8.3.27 🚀 Python-3.10.13 torch-2.5.1+cpu CPU (13th Gen Intel Core(TM) i9-13900KF)
YOLOv8n-obb summary (fused): 187 layers, 3,080,144 parameters, 0 gradients, 8.3 GFLOPs

PyTorch: starting from 'yolov8n-obb.pt' with input shape (16, 3, 640, 640) BCHW and output shape(s) (16, 20, 8400) (6.3 MB)

ONNX: starting export with onnx 1.14.0 opset 19...
ONNX: slimming with onnxslim 0.1.47...
ONNX: export success ✅ 1.2s, saved as 'yolov8n-obb.onnx' (11.9 MB)

Export complete (2.3s)
Results saved to D:\lilin\ultralytics_yolov8\ultralytics\engine
Predict: yolo predict task=obb model=yolov8n-obb.onnx imgsz=640
Validate: yolo val task=obb model=yolov8n-obb.onnx imgsz=640 data=runs/DOTAv1.0-ms.yaml
Visualize: https://netron.app
```
Can anyone take a look and tell me what the problem is?
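If re-exporting with batch 1 turns out to be the fix, a sketch with the standard Ultralytics Python API might look like the following; the airockchip fork's exporter.py may take its values from default.yaml instead, so treat the exact arguments as assumptions:

```python
# Sketch: re-export with an explicit batch of 1 instead of inheriting the
# training batch. Uses the standard Ultralytics API, not the fork's exporter.py.
from ultralytics import YOLO

model = YOLO("yolov8n-obb.pt")            # or the retrained best.pt
model.export(format="onnx", imgsz=640, batch=1, opset=19)
# The exported .onnx should then report an input shape of (1, 3, 640, 640),
# which matches the single calibration image the RKNN quantizer feeds it.
```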

@fatcatkk

I ran into the same problem as you before. Check whether the `format` entry under `# Export settings` in your default.yaml has been changed; I had changed it to onnx and then got exactly this error.
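If it helps, here is a quick way to see what the exporter will pick up from default.yaml (the path below assumes the stock Ultralytics layout and may differ in your checkout):

```python
# Print the export-related settings currently active in default.yaml.
import yaml

CFG = "ultralytics/cfg/default.yaml"  # adjust to your checkout

with open(CFG, "r", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

# 'format' sits under the "# Export settings" block; 'batch' is the global
# batch setting, which is one place the 16 can come from.
print("format:", cfg.get("format"))
print("batch:", cfg.get("batch"))
```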

@mozeqiu123 (Author)

> I ran into the same problem as you before. Check whether the `format` entry under `# Export settings` in your default.yaml has been changed; I had changed it to onnx and then got exactly this error.

I exported the ONNX model exactly as described there, only changing the path to my own model. Later I found that the RKNN conversion goes through if I skip quantization; I'm currently testing whether the resulting RKNN model is usable.
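For reference, the non-quantized path boils down to roughly the following; names follow the rknn_model_zoo convert.py, but the config arguments are trimmed, so this is a sketch rather than the script's exact code:

```python
# Sketch of the non-quantized RKNN build path.
from rknn.api import RKNN

rknn = RKNN(verbose=False)
rknn.config(target_platform="rk3588")
rknn.load_onnx(model="best.onnx")

# Skipping quantization avoids the calibration step that enforces the
# (16, 640, 640, 3) dataset shape, so the build completes.
ret = rknn.build(do_quantization=False)
assert ret == 0, "build failed"
rknn.export_rknn("best.rknn")
rknn.release()
```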

[screenshot attached]
