However, I have an issue with the quality of the generated music after fine-tuning on the MAESTRO dataset. For context, I used the "REMI-tempo-chord-checkpoint" as the base model and trained it for 5 epochs over the whole dataset. After reading the paper, my hypothesis is that the problem comes from time signatures in the MAESTRO dataset that are not supported by the REMI encoding.
Do you have any insights about this problem?
In my opinion, the main reason is the musical difference between pop music (my data) and classical music (MAESTRO): for example, the time signatures you mentioned (pop music is usually 4/4), and the time-quantization scale (16 positions per bar in our method; perhaps classical music needs a finer scale?).
You can try this by modifying the data pre-processing parameters!
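To illustrate why the quantization scale matters, here is a minimal sketch of how a Position grid snaps note onsets, and how a finer grid reduces the timing error for expressive classical performances. The function and parameter names (`quantize_onset`, `positions_per_bar`, `ticks_per_bar`) are illustrative, not REMI's actual identifiers; only the default of 16 positions per bar comes from the paper.

```python
def quantize_onset(onset_tick, ticks_per_bar=1920, positions_per_bar=16):
    """Snap a note onset (in MIDI ticks) to the nearest Position slot.

    REMI-style encodings place events on a fixed grid of slots per bar;
    16 is the default used in the paper. Names here are illustrative.
    """
    step = ticks_per_bar / positions_per_bar
    return round(onset_tick / step) * step

# An off-grid onset, as is common in expressive classical performances:
onset = 530

coarse = quantize_onset(onset, positions_per_bar=16)  # step = 120 ticks -> 480
fine = quantize_onset(onset, positions_per_bar=32)    # step = 60 ticks  -> 540
print(abs(onset - coarse), abs(onset - fine))  # quantization error: 50 vs 10 ticks
```

With 16 positions per bar the onset is shifted by 50 ticks, while 32 positions per bar reduces the error to 10 ticks; the trade-off is a larger Position vocabulary and longer training sequences.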
Thank you for the amazing work on this project!