Thanks for the cool work!
In the paper, it is stated that nuScenes is trained for 72 epochs in total (18, 6, and 48 epochs for the three stages), with batch sizes of 16, 48, and 16, respectively.

However, these numbers deviate from the configs. In the configs for the nuScenes new splits, the three stages run for 18, 4, and 36 epochs with 3, 8, and 4 samples per GPU, respectively. That gives 58 epochs in total, and the batch sizes do not match the paper either. Which settings were the reported results obtained with?

I observe a similar discrepancy in the configs for the old split. In addition, why do the old-split configs use different hyperparameters from the new-split ones?
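For concreteness, here is a minimal sketch of the mismatch (the 8-GPU count below is my assumption; the configs only specify samples_per_gpu, so the effective batch size depends on how many GPUs are used):

```python
# Paper vs. config training schedules for the three stages.
# NOTE: num_gpus = 8 is an assumption, not stated in the configs;
# effective batch size = samples_per_gpu * num_gpus.

num_gpus = 8  # assumed

paper_epochs = [18, 6, 48]
paper_batch_sizes = [16, 48, 16]

config_epochs = [18, 4, 36]
config_samples_per_gpu = [3, 8, 4]

config_batch_sizes = [s * num_gpus for s in config_samples_per_gpu]

print("paper total epochs: ", sum(paper_epochs))     # 72
print("config total epochs:", sum(config_epochs))    # 58
print("paper batch sizes:  ", paper_batch_sizes)     # [16, 48, 16]
print("config batch sizes: ", config_batch_sizes)    # [24, 64, 32]
```

Under this assumption the effective batch sizes come out to 24, 64, and 32, which still do not line up with the 16/48/16 reported in the paper.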