Documentation for LoRAConfig. #2212
Comments
To explain further: The default implementation initializes the LoRA A parameter randomly and the LoRA B parameter to zeros. This results in LoRA being an identity transform at initialization, which can help with training. When setting the option to `False`, both parameters are initialized randomly, so this identity property no longer holds. For real LoRA training, you almost never want that, which is why we discourage it. However, the weights are not uninitialized memory; they are still explicitly initialized, just from a random distribution.
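A minimal sketch (not PEFT's actual code) of the two schemes described in the reply above; the shapes, rank, and scaling factor are illustrative assumptions:

```python
import math
import torch
import torch.nn as nn

in_features, out_features, r = 128, 64, 8  # assumed layer shape and LoRA rank

# Default scheme: LoRA A is random, LoRA B is all zeros, so B @ A == 0 and the
# adapter contributes nothing at initialization (the layer behaves exactly
# like the frozen base layer).
lora_A = nn.Parameter(torch.empty(r, in_features))
nn.init.kaiming_uniform_(lora_A, a=math.sqrt(5))
lora_B = nn.Parameter(torch.zeros(out_features, r))
assert torch.all((lora_B @ lora_A) == 0)

# "Completely random" scheme (roughly what setting the option to False does):
# both matrices get random values, so the adapter perturbs the base layer's
# output from the very first step. The values are still explicitly
# initialized, not leftover memory contents.
lora_A_rand = nn.Parameter(torch.randn(r, in_features) * 0.02)
lora_B_rand = nn.Parameter(torch.randn(out_features, r) * 0.02)
```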
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Thanks. Is it possible to update the docs to explain what you've just said? I think this should be clear in the docs.
peft/src/peft/tuners/lora/config.py
Lines 112 to 113 in 162d7e5
Documentation for the `False` setting is not clear. Presumably 'completely random' means the arrays will be uninitialized and hence contain whatever the contents of the relevant memory locations are?
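To illustrate the distinction this question raises, here is a small sketch (plain PyTorch, arbitrary shapes) contrasting truly uninitialized memory with random initialization:

```python
import torch

# Uninitialized memory: torch.empty allocates storage without setting values,
# so the tensor holds whatever bytes were already at those memory locations.
uninitialized = torch.empty(4, 4)

# Random initialization: values are explicitly drawn from a distribution
# (here a standard normal). Per the reply above, this is what "completely
# random" means for the adapter weights, not leftover memory contents.
randomly_initialized = torch.randn(4, 4)

print(uninitialized)
print(randomly_initialized)
```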