I propose that the point forecasts of N-BEATS be included in this repo for all the datasets evaluated in the original paper, as has been done for the M4 competition's GitHub repository. This would ease the comparison of the N-BEATS model with others and increase the visibility of the N-BEATS paper.
Besides reproducing the experimental results presented in the paper, it is often more convenient to rely on precomputed forecasts to compare different models on the same dataset. For instance, the M4 competition's GitHub repository provides the point forecasts of all submitted models in compressed CSV files, which permits comparing the models per individual series. This way, we can evaluate the performance of ES-RNN and FFORMA using different loss functions, but not the N-BEATS model. Thankfully, precomputed forecasts are available for ES-RNN and FFORMA, which matters given the execution time required to reproduce them. It would be great if N-BEATS did not fall under this reproducibility exception. The same argument holds for the other datasets evaluated in the paper.
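To make the use case concrete, here is a minimal sketch of the kind of per-series evaluation that precomputed forecasts would enable. The arrays are toy data standing in for values that would be loaded from the hypothetical forecast CSVs; the sMAPE definition follows the M4 convention:

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric MAPE per series (M4 convention), in percent.
    y_true, y_pred: arrays of shape (n_series, horizon)."""
    num = np.abs(y_true - y_pred)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return 100.0 * np.mean(num / denom, axis=1)

# Toy example: two series with a forecast horizon of 3.
# With real data, y_pred would come from a precomputed-forecast CSV
# (one row per series), so any metric can be recomputed per series.
y_true = np.array([[10.0, 11.0, 12.0], [5.0, 5.0, 5.0]])
y_pred = np.array([[10.0, 11.0, 12.0], [4.0, 6.0, 5.0]])

per_series = smape(y_true, y_pred)  # one sMAPE value per series
```

Swapping `smape` for any other loss function is then a one-line change, which is exactly the flexibility that published point forecasts give.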
Great repo btw!
PhilippeChatigny changed the title from "Increase N-beats reproducibility by providing pre-computed forecast on dataset" to "Increase N-beats reproducibility by providing pre-computed forecast on evaluated datasets in the original paper" on Sep 22, 2020.