Unloading multiple loras: norms do not return to their original values #10745
Should it not already take care of it? See diffusers/src/diffusers/loaders/lora_pipeline.py, Line 1894 in 464374f. What am I missing? Additionally, the following test verifies its effectiveness: diffusers/tests/lora/test_lora_layers_flux.py, Line 661 in 28f48f4.
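For context, a simplified paraphrase of the unload path being referenced, which restores the norm layers from a cached state dict (the attribute name and structure follow the linked lora_pipeline.py at that commit; this is a sketch of an excerpt, not the verbatim source):

```python
def unload_lora_weights(self):
    # Excerpt-style sketch of the Flux pipeline's unload path.
    # First remove the LoRA adapter layers themselves.
    super().unload_lora_weights()

    transformer = self.transformer
    # If a Control LoRA overwrote norm layers, their pre-LoRA values were
    # cached on the transformer; write them back and clear the cache.
    if getattr(transformer, "_transformer_norm_layers", None):
        transformer.load_state_dict(transformer._transformer_norm_layers, strict=False)
        transformer._transformer_norm_layers = None
```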
Ah, is it possible to call load_lora_weights() multiple times on a pipeline to load multiple weights? Does it unload in between to restore the original weights?
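For concreteness, "multiple calls" would look something like the sketch below; the Canny repo id matches the published Flux Control LoRA, while `user/some-flux-style-lora` is a hypothetical placeholder:

```python
import torch
from diffusers import FluxControlPipeline

pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# First load: a Control LoRA that also ships norm-layer weights.
pipe.load_lora_weights("black-forest-labs/FLUX.1-Canny-dev-lora", adapter_name="canny")
# Second load, without unloading in between (placeholder repo id).
pipe.load_lora_weights("user/some-flux-style-lora", adapter_name="style")
pipe.set_adapters(["canny", "style"])

# The question in this thread: after this, does unloading return the norm
# layers to their pre-LoRA values?
pipe.unload_lora_weights()
```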
If you don’t call unload_lora_weights() in between, the original values are not restored.
So in that case of multiple calls to load_lora_weights(), the attribute that caches the original norm values gets overwritten by the later calls?
If you're loading a Control LoRA and want to keep it active alongside others, the norm layer values that came with the Control LoRA have to remain in place; otherwise the Control LoRA won't be fully effective. Or am I misinterpreting the core use case here?
When unloading after multiple LoRAs on the Flux pipeline, I believe the norm layers are not restored to their original values here.
Shouldn't we have:
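The proposed snippet itself isn't rendered in the thread. One way to read the suggestion, given that load_lora_weights() caches the pre-LoRA norm values on the transformer before overwriting them: only take that snapshot on the first load, so a later call can't replace the true originals with the previous LoRA's norms. A minimal sketch, assuming the attribute and helper names used in the linked lora_pipeline.py:

```python
# Inside load_lora_weights(), where norm weights shipped with the LoRA
# are written into the transformer:
if len(transformer_norm_state_dict) > 0:
    overwritten = self._load_norm_into_transformer(
        transformer_norm_state_dict,
        transformer=transformer,
        discard_original_layers=False,
    )
    # Proposed guard: only cache the originals if nothing is cached yet,
    # so unload_lora_weights() restores the true pre-LoRA values even
    # after several load_lora_weights() calls.
    if getattr(transformer, "_transformer_norm_layers", None) is None:
        transformer._transformer_norm_layers = overwritten
```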