Now vLLM is still under test & dev and ONLY BASIC INFERENCE is available. It will not be enabled by default, and if you are a developer, you can easily find the way to enable it by looking at the parameters of Chat.load.
Using vLLM leads to the following error: ImportError: cannot import name 'LogicalTokenBlock' from 'vllm.block' (/root/miniconda3/envs/py39/lib/python3.9/site-packages/vllm/block.py)
How can I fix it?
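This traceback suggests a version mismatch: the installed vllm.block no longer exposes the LogicalTokenBlock symbol that the integration tries to import. Below is a small diagnostic sketch (not a fix) that only prints the installed vLLM version and checks whether the symbol is still present; it uses nothing beyond the names already shown in the traceback.

```python
# Diagnostic only: report the installed vLLM version and whether
# vllm.block still defines the LogicalTokenBlock symbol that the
# ChatTTS integration tries to import (per the traceback above).
import importlib

import vllm

print("vLLM version:", vllm.__version__)

try:
    block = importlib.import_module("vllm.block")
except ImportError:
    print("vllm.block module not found in this vLLM build")
else:
    print("LogicalTokenBlock present:", hasattr(block, "LogicalTokenBlock"))
```

If the symbol is missing, the installed vLLM is most likely newer than what this integration was written against, and the versions need to be aligned.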
Now vLLM is still under test & dev and ONLY BASIC INFERENCE is available. It will not be enabled by default, and if you are a developer, you can easily find the way to enable it by looking at the parameters of Chat.load.
Does this mean that zero-shot inference is not supported yet?
The README only mentions 'pip install vllm...'
May I know how I should proceed to enable vLLM?
Do I need to do anything after installing vLLM?
Or will ChatTTS automatically use vLLM when it detects that it is available?
Thank you in advance.
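For illustration, here is a minimal sketch of what explicitly opting in might look like. The quoted note only says the switch lives in the parameters of Chat.load and is off by default, so the keyword name use_vllm below is an assumption; check the actual signature of Chat.load in your installed ChatTTS to confirm it.

```python
# A minimal sketch, not a confirmed API: the parameter name use_vllm is
# an assumption to verify against inspect.signature(ChatTTS.Chat.load).
import ChatTTS

chat = ChatTTS.Chat()
# Hypothetical flag: vLLM is not enabled just because the package is
# installed, so it has to be requested explicitly at load time.
chat.load(compile=False, use_vllm=True)

wavs = chat.infer(["Testing the vLLM-backed inference path."])
```

Installing vLLM alone should not change anything, since the quoted note says it will not be enabled by default.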