
[BUG] mlc-llm benchmark failed with ROCm #36

Open

alexhegit opened this issue Jan 17, 2024 · 1 comment

@alexhegit

I followed the rocm-benchmark guide step by step, but the last step fails. The log is below.

$ python -m mlc_chat.cli.benchmark --model ${PATH_TEST}/params --device "rocm:0" --prompt "What is the meaning of life?" --generate-length 256

The key line in the error log is "[2024-01-17 08:53:37] ERROR model_metadata.py:93: FAILED to read metadata section in legacy model lib."

(screenshot: mlc-llm-rocm-bm-failed)
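
For reference: this error is raised when mlc_chat cannot read the metadata that newer builds embed in the compiled model lib. Below is a minimal check, assuming a TVM Unity build with relax; the lib path is hypothetical, and the "_metadata" function name is my reading of mlc_chat's model_metadata.py loader, so it may differ across versions.

import json
import tvm
from tvm import relax

# Hypothetical path: point this at the compiled model lib the benchmark loads
lib = tvm.runtime.load_module("/path/to/model_lib.so")
vm = relax.VirtualMachine(lib, tvm.cpu())
# A lib that lacks this function predates the embedded-metadata format,
# which is what the "legacy model lib" message refers to
print(json.loads(vm["_metadata"]()))

If the call above fails, the usual remedy for this class of error is to recompile the model lib with the same mlc-llm version that runs the benchmark, so the two agree on the metadata format.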

alexhegit changed the title from "rocm benchmark failed" to "[BUG] mlc-llm benchmark failed with ROCm" on Jan 17, 2024
@awz

awz commented Sep 30, 2024

Any updates on this? I have a system with 3x W7900 + 1x W6800 that works fine with llama.cpp. I'm hoping to get 4x W7900 working soon and to try that in MLC.
