
No module named scripts.inference #210

Closed
ffhelly opened this issue Oct 21, 2024 · 6 comments

ffhelly commented Oct 21, 2024

python -m scripts.inference --inference_config configs/inference/test.yaml

Error message: No module named scripts.inference

I also tried pip install scripts and the install succeeded, but I still get the same error message.

vincentWuK commented Oct 22, 2024

cd scripts and create an empty __init__.py
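
For example, from the repository root (assuming the repo keeps the entry point at scripts/inference.py, which is what the -m invocation expects):

touch scripts/__init__.py
python -m scripts.inference --inference_config configs/inference/test.yaml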

@vincentWuK

Or just try running it with python scripts/inference.py --inference_config configs/inference/test.yaml

ffhelly commented Oct 22, 2024

Or just try running it with python scripts/inference.py --inference_config configs/inference/test.yaml

Thanks, bro.

I tried running the program this way and eventually found that my MMCV build doesn't seem to match: my current CUDA version is 12.3, but MMCV reports version 11. Do I need to downgrade CUDA?

The detected CUDA version (12.3) mismatches the version that was used to compile PyTorch (11.7). Please make sure to use the same CUDA versions.
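
For reference, a quick way to see both sides of that mismatch (standard tools only, nothing project-specific):

nvidia-smi    # driver-side CUDA version
python -c "import torch; print(torch.__version__, torch.version.cuda)"    # CUDA version PyTorch was built with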

@vincentWuK

You can first try using conda to manage the CUDA environment. If that doesn't work, try using a devcontainer in VS Code.

@vincentWuK

conda create -> conda activate -> install pytorch with the matching pytorch-cuda version, ....
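
A minimal sketch of that, assuming a CUDA 11.7 PyTorch build to match the error above (the environment name and pinned versions are only placeholders; adjust them to the repo's requirements):

conda create -n musetalk python=3.10
conda activate musetalk
conda install pytorch=2.0.1 torchvision pytorch-cuda=11.7 -c pytorch -c nvidia

Then reinstall mmcv and the rest of the project's dependencies inside this environment so they are built against the same PyTorch/CUDA combination.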

ffhelly commented Oct 22, 2024

conda create -> conda activate -> install pytorch with the matching pytorch-cuda version, ....

Do you mean create a new conda environment, install the corresponding PyTorch version (built for CUDA 11.x), and then deploy there?
Will the 12.2 version I see in nvidia-smi conflict with the new conda environment? bro
