Right now, the loss is backpropagated for every axon mask individually, and the model parameters are updated each time. This means there are around 10k+ updates every epoch, which makes training very slow (~15 minutes/epoch).
Additionally, I'm now using monai.losses.DiceLoss() as the loss function and I get loss values very close to 1 (mean epoch loss of 0.995), because a single myelin sheath occupies a very small percentage of the image.
Both issues can be solved by computing the loss only once per image. To do this, the predictions for every axon in an image should be summed into a single segmentation mask, and the loss computed over that combined mask. This way, there would be only 158 parameter updates per epoch.
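A minimal sketch of the proposed fix, in plain PyTorch. The function names (image_level_loss, soft_dice_loss) and shapes are hypothetical, and a hand-rolled soft Dice stands in for monai.losses.DiceLoss() so the snippet is self-contained; the key point is that all per-axon predictions are merged into one mask before a single loss/backward pass:

```python
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    # Stand-in for monai.losses.DiceLoss(): 1 - soft Dice coefficient.
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def image_level_loss(per_axon_preds, per_axon_targets):
    # Sum the per-axon predictions (and targets) into one full-image mask,
    # clamping to [0, 1] in case neighboring masks overlap, then compute
    # a single loss for the whole image instead of one per axon.
    pred_mask = torch.clamp(torch.stack(per_axon_preds).sum(dim=0), 0.0, 1.0)
    target_mask = torch.clamp(torch.stack(per_axon_targets).sum(dim=0), 0.0, 1.0)
    return soft_dice_loss(pred_mask, target_mask)
```

With this, the training loop would call loss.backward() and optimizer.step() once per image (158 times per epoch) rather than once per axon mask.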