Fine-tuning without LoRA #832

Open · 1 task
bensonbs opened this issue Jan 16, 2025 · 2 comments
Labels: enhancement (New feature or request)

Comments

@bensonbs commented Jan 16, 2025

1. Is this request related to a challenge you're experiencing? Tell us your story.

I would like to understand the process and feasibility of fine-tuning the entire model rather than using LoRA (Low-Rank Adaptation). LoRA is great for parameter-efficient fine-tuning, but I am exploring scenarios where I need to fine-tune all parameters to get better control over the model's performance.


2. What is your suggested solution?

I propose modifying the training code as follows:

model = BaseTransformer.from_pretrained(
    path="path_to_your_pretrained_model",
    load_weights=True,
    lora_config=None,  # <-- pass None to build the model without LoRA adapters
)
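
As a quick sanity check, something along these lines could follow the call above. This is a sketch rather than code from the repository, and it assumes model is the BaseTransformer instance just constructed (a torch.nn.Module):

# With lora_config=None, no submodule name should mention LoRA...
assert not any("lora" in name.lower() for name, _ in model.named_modules())

# ...and every parameter should be trainable for full fine-tuning.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable:,} / {total:,}")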

Then, execute the training command:

python fish_speech/train.py --config-name text2semantic_finetune \
    project=$project \

If training without LoRA, is it still necessary to convert LoRA weights back to regular weights (the merge step from the LoRA workflow)?

What steps should I follow before inference? My tentative understanding is sketched below.
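
This is a sketch of what I expect inference loading to look like, reusing the BaseTransformer API from the snippet above (the checkpoint path is a placeholder, not a real path from the repo): with lora_config=None there are no adapter deltas, so the saved checkpoint should already hold the final full weights and load directly, with no merge step.

model = BaseTransformer.from_pretrained(
    path="results/$project/checkpoints/your_checkpoint",  # placeholder path
    load_weights=True,
    lora_config=None,  # trained without LoRA, so there is nothing to merge
)
model.eval()  # standard PyTorch: switch to inference mode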


3. Additional context or comments

No response.


4. Can you help us with this feature?

  • I am interested in contributing to this feature.
@bensonbs added the enhancement (New feature or request) label on Jan 16, 2025
@bensonbs reopened this on Jan 16, 2025
@PoTaTo-Mika (Collaborator) commented:

PR welcome for full-parameter fine-tuning.

@abhisirka2001 commented:

Could you do full-parameter fine-tuning of Fish Speech?
