LTXVPromptEnhancer: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select) #119
Comments
@Skol600ml Hey there! I ran into the same "Expected all tensors to be on the same device" error. Here's how I fixed it, by explicitly moving the models and images onto the same device. Just add the lines marked with `# <-- add this line`.

In the prompt enhancer loader (e.g., `down_load_llm_model` and `down_load_image_captioner`):

```python
def down_load_llm_model(self, llm_name, load_device):
    ...

def down_load_image_captioner(self, image_captioner, load_device):
    ...
```

And in the `enhance()` method:

```python
def enhance(...):
    ...
```
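For context, here is a minimal, self-contained sketch of the pattern the fix above relies on. The helper `move_to_device` is hypothetical (it is not part of `Prompt_enhancer_node.py`); it just illustrates moving a model and its input tensors onto one device so that ops like `index_select` no longer see a CUDA/CPU mismatch:

```python
import torch

def move_to_device(model, inputs, device=None):
    """Hypothetical helper: put a model and its input tensors on one device
    so every op (including index_select inside embedding lookups) sees
    matching devices."""
    if device is None:
        # Fall back to CPU when no GPU is available
        device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)                                # <-- add this line
    inputs = {k: v.to(device) for k, v in inputs.items()}   # <-- add this line
    return model, inputs

# Usage: an embedding layer plus token ids, possibly created on different devices
model = torch.nn.Embedding(10, 4)
inputs = {"input_ids": torch.tensor([1, 2, 3])}
model, inputs = move_to_device(model, inputs)
out = model(inputs["input_ids"])  # no device-mismatch error
```

The key point is that *both* sides must move: calling `.to(device)` on the model alone still fails if the token ids stay on the CPU, which is exactly the error in the issue title.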
Thank you, but I don't know which file you modified.
Never mind, it only works once on my side. Sadly the solution works once and then gives the same error on the next generation. My recommendation is to use the Ollama Generate node for prompt enhancement until the problem is fixed. It doesn't work as it's supposed to.
This is all done in the `Prompt_enhancer_node.py` file. The final code should look like:
The last part looks like this:
Hello all,
Unbelievable - I just came to Issues to find an answer, and you solved this 7 minutes before I visited. I tested this and it fixed the issue for me! Edit: for those who may be code illiterate,
You saved me a lot of time! Thank you.
The node is very cool; it works sometimes, but most of the time I get this error. Sometimes it works if you insist.