A few questions about SDXL Lora Training #189
walkingclark started this conversation in General
Replies: 1 comment 1 reply
-
Hope that helps!
-
Hi holo and everyone,
Thanks so much for responding to my posts; I didn't expect a reply so soon.
I have a few questions about this SDXL Colab that have confused me across different branches of this project. It would be very helpful if anyone could answer them:
1. About install_dependencies:
Are the warnings or errors fatal to the results? They usually involve the versions of jax or opencv-python. Should I do anything when I see these warnings or errors in the future?
2. About optimizers:
The Colab recommends AdamW8bit or Prodigy. Are there suggested arguments for the other optimizers, in case I want to try them some time to work around issues? Some people say AdamW8bit may cause problems; is that true?
3. About fp16/bf16:
The Colab recommends turning on "bf16" when using an A100 GPU. However, in config_dict() > "saving_arguments" > "saving precision", "fp16" is hard-coded. Should I change this to match?
4. About network_dim and network_alpha:
The SDXL Colab recommends 8-4, while the SD1.5 guides recommend 16-8 to 32-16. What are these numbers based on? I have tried various values in SD1.5 (such as 16-4, 16-1, and 64-32) and they worked fine, so I don't understand why the SDXL Colab limits them to 32-32 and below.
5. How do I know whether a LoRA is "Full" or "Pruned"?
These may be silly questions, but they have troubled me for a long time and cost me tons of time and computing units. Any answers would be much appreciated.
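On question 2: assuming the Colab wraps kohya-ss sd-scripts (which exposes `--optimizer_type` and `--optimizer_args`), per-optimizer settings are passed as key=value strings. The Prodigy values below are commonly suggested community starting points, not official recommendations; Prodigy is adaptive, so the learning rate is usually set to 1.0:

```shell
# Hypothetical sd-scripts flag fragment; all values are illustrative.
--optimizer_type="Prodigy" \
--learning_rate=1.0 \
--optimizer_args "decouple=True" "weight_decay=0.01" "use_bias_correction=True"
```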
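On question 3: if the Colab wraps kohya's sd-scripts, the training precision and the saving precision are independent settings, and training in bf16 while saving the LoRA in fp16 is a common, deliberate default (fp16 files are smaller and widely compatible), so the hard-coded "fp16" is probably fine to leave alone. A sketch of the two flags, assuming the sd-scripts CLI:

```shell
# Train in bf16 on an A100, but still save the LoRA weights as fp16.
--mixed_precision="bf16" \
--save_precision="fp16"
```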
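On question 1: the pip warnings about jax or opencv-python in Colab are often just the dependency resolver complaining about packages Colab preinstalls, and are frequently harmless to training; this is an assumption about typical Colab behavior, not a statement about this specific notebook. A generic way to sanity-check which versions actually ended up installed is:

```python
from importlib import metadata

def report_versions(packages):
    """Map each package name to its installed version, or None if absent."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None
    return versions

# Example: check the packages the resolver complained about.
print(report_versions(["jax", "opencv-python"]))
```

If the packages the trainer actually imports resolve to sensible versions, the warnings can usually be ignored.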
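On question 4: in kohya-style LoRA the learned update is scaled by network_alpha / network_dim, so many dim-alpha pairs "work" because alpha mainly rescales the effective strength of the update rather than gating quality; the 32-32 cap is presumably a choice of this particular Colab's UI, not a hard limit of LoRA itself. A minimal sketch of the scaling arithmetic:

```python
def lora_scale(network_alpha: float, network_dim: int) -> float:
    # kohya-style LoRA applies delta_W = (network_alpha / network_dim) * (B @ A),
    # so alpha/dim is the effective multiplier on the learned update.
    return network_alpha / network_dim

# 8-4, 16-8, and 32-16 all share the same 0.5 multiplier;
# 16-4 quarters it, and 16-1 shrinks it further.
for dim, alpha in [(8, 4), (16, 8), (32, 16), (16, 4), (16, 1)]:
    print(f"dim={dim:>2}, alpha={alpha:>2} -> scale={lora_scale(alpha, dim)}")
```

This is why the recommendations are often quoted as ratios (alpha = dim/2) rather than absolute numbers.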