1. Is this request related to a challenge you're experiencing? Tell us your story.
I would like to understand the process and feasibility of fine-tuning the entire model instead of using LoRA (Low-Rank Adaptation). While LoRA is great for parameter-efficient fine-tuning, I am exploring scenarios where I need to fine-tune all of the model's weights to achieve better control over its performance.
2. What is your suggested solution?
I propose modifying the training code as follows:
model = BaseTransformer.from_pretrained(
    path="path_to_your_pretrained_model",
    load_weights=True,
    lora_config=None,  # <-- pass None here so no LoRA adapters are attached
)
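To make the proposal concrete, here is a minimal sketch of what I expect full fine-tuning to look like. It assumes BaseTransformer.from_pretrained returns a standard torch.nn.Module (as the snippet above suggests); the path and learning rate are placeholders, not values from this repository.

import torch

# Load the base model with no LoRA configuration attached (placeholder path).
model = BaseTransformer.from_pretrained(
    path="path_to_your_pretrained_model",
    load_weights=True,
    lora_config=None,
)

# Without LoRA there is no frozen-base-plus-adapter split: every parameter
# should be trainable, and the optimizer has to cover the full parameter set
# (so memory use is much higher than with LoRA).
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable:,} / {total:,}")

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # placeholder hyperparameters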
Then, execute the training command.

Two follow-up questions (a sketch of what I mean by the first one is below):
- If training without LoRA, is it still necessary to convert LoRA weights back into regular weights?
- What steps should I follow before inference?
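To clarify the first question: with LoRA, the trained low-rank matrices are normally merged back into the base weights before plain inference, whereas with full fine-tuning the saved checkpoint should already contain the updated weights. The sketch below only illustrates that difference; the checkpoint filename and the merge formula in the comments are generic, not this repository's actual API.

import torch

# LoRA inference path (for comparison): each adapted linear layer computes
#     W = W0 + (alpha / r) * lora_B @ lora_A
# so the low-rank update is usually folded into W0 before deployment.
#
# Full fine-tuning path: no adapters exist, so I expect the checkpoint to be
# loadable directly (filename below is a hypothetical placeholder).
model = BaseTransformer.from_pretrained(
    path="path_to_your_pretrained_model",
    load_weights=True,
    lora_config=None,
)
state_dict = torch.load("full_finetune_checkpoint.pth", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()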
3. Additional context or comments
No response.
4. Can you help us with this feature?