Why set_lora_device doesn't work #9913
Comments
The reproduction seems very incomplete. Can you please provide a fuller reproduction? Also, what versions of diffusers and peft are you using?
I need to load multiple LoRAs and switch between them. Each time, I move the LoRA in use onto the GPU with set_lora_device, while the unused ones are moved to the CPU. When initializing these LoRAs, after load_lora_weights, they are all placed on the CPU, as shown in the code:
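Roughly, the per-request switch looks like this (a sketch; the activate_lora helper and lora_list are placeholders from my setup, while set_lora_device and set_adapters are the diffusers calls I use):

```python
import torch

def activate_lora(pipe, active, lora_list, device="cuda"):
    # Move the adapter in use onto the GPU, keep every other loaded adapter on the CPU.
    pipe.set_lora_device([active], torch.device(device))
    inactive = [k for k in lora_list if k != active]
    if inactive:
        pipe.set_lora_device(inactive, torch.device("cpu"))
    # Select the active adapter for inference.
    pipe.set_adapters([active])
```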
Actually, it does save a lot of GPU memory, but GPU memory still grows slowly. My understanding is that after calling pipe.set_lora_device([adapter], 'cpu'), GPU VRAM should not grow.
before:
diffusers and peft version:
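A quick way to sanity-check that (a sketch; the adapter name is a placeholder). Note that nvidia-smi reports PyTorch's cached memory, so torch.cuda.empty_cache() may be needed before the drop shows up there:

```python
import torch

# Sketch: compare allocated VRAM before and after offloading one adapter
# ("adapter" is a placeholder for one of the loaded adapter names).
torch.cuda.synchronize()
print(f"before: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")
pipe.set_lora_device(["adapter"], torch.device("cpu"))
torch.cuda.empty_cache()  # release cached blocks so nvidia-smi reflects the change
print(f"after:  {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")
```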
There's another issue: if several LoRAs are loaded and some contain text_encoder weights while others only contain unet weights, set_lora_device() raises a KeyError. It needs to check whether the adapter key is present in module.lora_A and module.lora_B before moving it; see the sketch below.
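Something like the following guard would avoid it (a sketch of the idea, not the actual set_lora_device internals; safe_set_lora_device is a hypothetical helper):

```python
def safe_set_lora_device(model, adapter_names, device):
    # Hypothetical helper sketching the proposed guard: only move an
    # adapter's weights on modules that actually hold that adapter,
    # so a LoRA without text_encoder layers no longer raises a KeyError.
    for module in model.modules():
        if hasattr(module, "lora_A"):
            for name in adapter_names:
                if name in module.lora_A:
                    module.lora_A[name].to(device)
                if name in module.lora_B:
                    module.lora_B[name].to(device)
```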
Can you try with a more recent version of PyTorch and
Yeah, this seems right. This also seems like a different issue. Would you maybe like to open a PR for this? Cc: @BenjaminBossan
Describe the bug
When I load several LoRAs and offload them with set_lora_device(), GPU memory continues to grow, going from 20 GB to 25 GB; this function doesn't work.
Reproduction
```python
import torch

# Load each LoRA under its own adapter name, then offload it to the CPU.
for key in lora_list:
    weight_name = key + ".safetensors"
    pipe.load_lora_weights(lora_path, weight_name=weight_name, adapter_name=key, local_files_only=True)
    adapters = pipe.get_list_adapters()
    print(adapters)
    pipe.set_lora_device([key], torch.device('cpu'))
```
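To double-check where an adapter's weights actually live after that call, something like this can help (a sketch assuming peft's lora_A/lora_B ModuleDict layout on injected modules; key is an adapter name from the loop above):

```python
# Print the device of one adapter's LoRA weights inside the UNet.
for name, module in pipe.unet.named_modules():
    if hasattr(module, "lora_A") and key in module.lora_A:
        print(name, next(module.lora_A[key].parameters()).device)
        break
```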
Logs
No response
System Info
GPU: V100 (32 GB)
diffusers 0.32.0.dev0
torch 2.0.1+cu118
peft 0.12.0
Who can help?
No response