I used the ETTh1.csv dataset to fine-tune the moirai_1.1_small model. The fine-tuning process went smoothly and produced a checkpoint file named 'epoch=0-step=100.ckpt'. I then used this checkpoint to run eval.py and got a large MSE of 28.866286, and the forecast output looks bad.
So I wondered whether the large fluctuations in the raw data were the cause. I applied three different normalization methods to preprocess the raw dataset, reran eval.py, and achieved a much better MSE.
My question is: Is there an officially recommended method for data normalization? Thank you!
Hi @yAoOw, I think 100 steps are insufficient for fine-tuning. Could you decrease the learning rate and increase the batch size? In the default pipeline, the convergence termination criterion is based on the validation performance. Could you check the MSE on the validation set before and after fine-tuning?
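For reference, here is a minimal sketch of the before/after check suggested above, assuming you have already collected the forecasts and ground-truth values for the validation windows as NumPy arrays. The variable names and placeholder data are purely illustrative, not part of the eval.py API:

```python
import numpy as np

def mse(forecast: np.ndarray, target: np.ndarray) -> float:
    """Mean squared error over all validation windows and variates."""
    return float(np.mean((forecast - target) ** 2))

# Placeholder arrays standing in for forecasts produced on the validation split
# by the pretrained and the fine-tuned checkpoint, respectively;
# shape: (num_windows, prediction_length, num_variates).
rng = np.random.default_rng(0)
val_targets = rng.normal(size=(10, 96, 7))
forecasts_pretrained = val_targets + rng.normal(scale=0.5, size=val_targets.shape)
forecasts_finetuned = val_targets + rng.normal(scale=0.3, size=val_targets.shape)

print(f"validation MSE before fine-tuning: {mse(forecasts_pretrained, val_targets):.4f}")
print(f"validation MSE after fine-tuning:  {mse(forecasts_finetuned, val_targets):.4f}")
```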
In terms of normalization, the Moirai model includes instance normalization. However, the LSF benchmark requires an additional dataset-level normalization, which can be found in
Hi @yAoOw. For normalization, I think you can follow the standard normalization process for the LSF datasets. See the discussion here: #31 (comment). You can implement that approach based on the code @chenghaoliu89 shared above.
In my experience, this normalization method works well. However, since Moirai uses a large context length, fine-tuning on small datasets like ETTh1 and ETTh2 is quite tricky (it is easy to overfit). You can try ETTm1, ETTm2, and Weather, using a small learning rate like 1e-6 to 1e-7.
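As a concrete starting point, below is a minimal sketch of the dataset-level standardization discussed above: fit the statistics on the training split only, then apply them to the whole series. The ETTh1 column layout (a "date" column followed by the variates), the 0.6 train ratio, and the output file name are illustrative assumptions, not the exact preprocessing from the linked discussion:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Load ETTh1; the CSV is assumed to have a "date" column followed by the variates.
df = pd.read_csv("ETTh1.csv")
values = df.drop(columns=["date"]).to_numpy()

# Fit the scaler on the training split only, then transform the full series.
# The 0.6 train ratio is an illustrative assumption (ETT datasets are commonly
# split into train/val/test by months); adjust it to match your pipeline.
train_len = int(len(values) * 0.6)
scaler = StandardScaler()
scaler.fit(values[:train_len])
values_scaled = scaler.transform(values)

# Write the normalized data back out in the original CSV layout so that
# the fine-tuning script and eval.py can consume it unchanged.
df_scaled = df.copy()
df_scaled.loc[:, df.columns != "date"] = values_scaled
df_scaled.to_csv("ETTh1_normalized.csv", index=False)
```

With this kind of preprocessing, the per-variate mean and variance from the training portion are shared across train, validation, and test, which is the usual convention in the LSF benchmark.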