
Add test for mmdetection #77

Open
AleksKnezevic opened this issue Nov 27, 2024 · 4 comments
@AleksKnezevic (Contributor)

Add test for mmdetection as defined here

@ddilbazTT (Contributor)

I will be using this tutorial

@ddilbazTT (Contributor)

Some setup is required, which I will record here.

test_mmdetection.py:

from mmdet.apis import DetInferencer

# Choose a model config
model_name = 'rtmdet_tiny_8xb32-300e_coco'
# Checkpoint file to load
checkpoint = './checkpoints/rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth'

# Initialize the DetInferencer with the config name and checkpoint
inferencer = DetInferencer(model_name, weights=checkpoint)

# Use the detector to do inference
img = './tests/models/mmdetection/demo.jpg'
result = inferencer(img, out_dir='./tests/models/mmdetection/output')
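A note on working with the output: in mmdet 3.x, DetInferencer returns a dict whose `'predictions'` list holds one per-image dict of `'labels'`, `'scores'`, and `'bboxes'`. The sketch below uses a mock result (not real inferencer output) to show how a score-threshold filter over that structure might look:

```python
# Hedged sketch: mock_result mimics the assumed DetInferencer output layout;
# values are made up for illustration.
mock_result = {
    'predictions': [{
        'labels': [0, 0, 2],
        'scores': [0.92, 0.31, 0.78],
        'bboxes': [[10, 20, 110, 220], [5, 5, 50, 60], [200, 80, 320, 240]],
    }]
}

def filter_detections(result, score_thr=0.5):
    """Keep only detections whose confidence meets score_thr."""
    pred = result['predictions'][0]
    return [
        (label, score, bbox)
        for label, score, bbox in zip(pred['labels'], pred['scores'], pred['bboxes'])
        if score >= score_thr
    ]

print(filter_detections(mock_result))  # two of the three mock detections pass 0.5
```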

To run test_mmdetection.py:

pip install 'numpy<2.0'
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
pip install --upgrade openmim
mim install 'mmcv==2.1.0'
pip install mmengine
pip install mmdet
pip install --upgrade jedi
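Because the pins above interact (numpy<2.0, torch 2.1.0, mmcv 2.1.0), a quick sanity check that the resolver actually installed the intended versions can save debugging time. This is a hedged sketch; it only reads installed-package metadata and does not import the heavy libraries:

```python
# Hedged sanity-check sketch: report whether the pinned packages resolved.
# Package names/pins are taken from the install commands above.
from importlib.metadata import version, PackageNotFoundError

expected = {"torch": "2.1.0", "torchvision": "0.16.0", "mmcv": "2.1.0"}
for pkg, want in expected.items():
    try:
        got = version(pkg)
        status = "OK" if got.startswith(want) else f"MISMATCH (got {got})"
    except PackageNotFoundError:
        status = "NOT INSTALLED"
    print(f"{pkg}: {status}")
```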

To be able to use cmake and build:

sudo apt-get update
sudo apt-get install --only-upgrade cmake
sudo apt-get install gcc-10 g++-10
export CC=/usr/bin/gcc-10
export CUDAHOSTCXX=/usr/bin/g++-10
export CXX=/usr/bin/g++-10

Ran cmake as such: cmake -G Ninja -B build -DCMAKE_CUDA_COMPILER=$(which nvcc)

@ddilbazTT (Contributor)

@mmanzoorTT @AleksKnezevic @nsmithtt I am having some difficulty integrating mmdetection with tt-mlir. Asif has been helping with conflicting paths, and I decided to print the StableHLO graph by running the example with torch-xla.

The Colab notebook, which runs the mmdetection example and generates its StableHLO graph, has been shared with you. I attached a PDF view of it: mmdetection-Colab.pdf

Input image: (attached)

Attached is the StableHLO graph in a txt file, which could be helpful in the meantime: stablehlo_output.txt. I will work on creating hardware tests for these in MLIR to get an idea of which ops are missing support.

@ddilbazTT (Contributor)

ddilbazTT commented Dec 13, 2024

I was wondering if anybody could give some recommendations on a representative PointPillars dataset format. I can run mmdetection3d:

import mmdet3d
from mmdet3d.apis import LidarDet3DInferencer
inferencer = LidarDet3DInferencer('pointpillars_kitti-3class')
inputs = dict(points='./000008.bin')
inferencer(inputs, out_dir="./output/")
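On the dataset-format question: KITTI velodyne files such as 000008.bin are, to my understanding, raw float32 buffers of shape (N, 4), with columns x, y, z, reflectance. If that assumption holds, a representative placeholder point cloud can be synthesized and written with `tofile()`:

```python
import numpy as np

# Hedged sketch: synthesize a KITTI-style point cloud. The (N, 4) float32
# layout (x, y, z, reflectance) is an assumption about the velodyne format;
# 'dummy_000008.bin' is a hypothetical filename.
num_points = 1000
points = np.random.uniform(-40.0, 40.0, size=(num_points, 4)).astype(np.float32)
points[:, 3] = np.clip(points[:, 3] / 80.0 + 0.5, 0.0, 1.0)  # reflectance in [0, 1]
points.tofile('dummy_000008.bin')

# Round-trip check: the file reloads as a flat float32 buffer reshaped to (N, 4)
reloaded = np.fromfile('dummy_000008.bin', dtype=np.float32).reshape(-1, 4)
assert reloaded.shape == (num_points, 4)
```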

I cannot use the inputs as is to export the graph. The following fails:

import torch
import mmdet3d
from mmdet3d.apis import LidarDet3DInferencer
from torch.export import export

inferencer = LidarDet3DInferencer('pointpillars_kitti-3class')
inputs = dict(points='./000008.bin')
inferencer(inputs, out_dir="./output/")

model = inferencer.model

model.eval()

exported_model = export(model.forward, (inputs, ))

with error:

InternalTorchDynamoError: voxels

from user code:
   File "/usr/local/lib/python3.10/dist-packages/mmdet3d/models/detectors/base.py", line 88, in forward
    return self._forward(inputs, data_samples, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/mmdet3d/models/detectors/single_stage.py", line 136, in _forward
    x = self.extract_feat(batch_inputs_dict)
  File "/usr/local/lib/python3.10/dist-packages/mmdet3d/models/detectors/voxelnet.py", line 38, in extract_feat
    voxel_dict = batch_inputs_dict['voxels']

I tried to format the inputs as follows:

import torch
import mmdet3d
from mmdet3d.apis import LidarDet3DInferencer
from torch.export import export

inferencer = LidarDet3DInferencer('pointpillars_kitti-3class')
inputs = dict(points='./000008.bin')
inferencer(inputs, out_dir="./output/")

input_dict = {
    'voxels': torch.rand(200, 5, 4),
    'num_points': torch.randint(0, 5, (200,)),
    'coordinates': torch.randint(0, 100, (200, 3))
}

model = inferencer.model

# Prepare the model
model.eval()

exported_model = export(model.forward, (input_dict, ))

but this is also failing:

TorchRuntimeError: Failed running call_function <built-in function getitem>(*(FakeTensor(..., size=(200, 5, 4)), 'voxels'), **{}):
too many indices for tensor of dimension 3

from user code:
   File "/usr/local/lib/python3.10/dist-packages/mmdet3d/models/detectors/base.py", line 88, in forward
    return self._forward(inputs, data_samples, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/mmdet3d/models/detectors/single_stage.py", line 136, in _forward
    x = self.extract_feat(batch_inputs_dict)
  File "/usr/local/lib/python3.10/dist-packages/mmdet3d/models/detectors/voxelnet.py", line 39, in extract_feat
    voxel_features = self.voxel_encoder(voxel_dict['voxels'],

How can I write a placeholder input dict that aligns with pointpillar format? If anybody could chime in, I would appreciate it.
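One hedged reading of the two tracebacks: `_forward` first does `batch_inputs_dict['voxels']` and then `voxel_dict['voxels']`, so the placeholder apparently needs to be a *nested* dict, not a flat one (which is why the flat `input_dict` above is itself indexed with `'voxels'` and fails). The keys and shapes below are assumptions modeled on mmdet3d's hard-voxelization output for PointPillars on KITTI (`'voxels'`, `'num_points'`, `'coors'`); the arrays would need to be converted to torch tensors before reaching the model:

```python
import numpy as np

# Hedged sketch of a nested placeholder batch_inputs_dict. All shapes and key
# names are assumptions, not verified against the pointpillars_kitti-3class
# config; convert arrays to torch tensors before passing them to the model.
num_voxels, max_points, point_dim = 200, 32, 4
voxel_dict = {
    # (num_voxels, max_points_per_voxel, point_features)
    'voxels': np.random.rand(num_voxels, max_points, point_dim).astype(np.float32),
    # actual number of points stored in each voxel
    'num_points': np.random.randint(1, max_points + 1, size=(num_voxels,)),
    # per-voxel grid coordinates, assumed (batch_idx, z, y, x)
    'coors': np.zeros((num_voxels, 4), dtype=np.int32),
}
batch_inputs_dict = {'voxels': voxel_dict}
```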

Edit: model is here
