
Compute loss over the entire mask #5

Open
hermancollin opened this issue Jun 1, 2023 · 0 comments
hermancollin commented Jun 1, 2023

Right now, the loss is backpropagated separately for every axon in every mask, and the model parameters are updated each time. This results in roughly 10k+ updates per epoch, which makes training very slow (~15 minutes/epoch).

Additionally, I'm now using monai.losses.DiceLoss() as the loss function and I get loss values very close to 1 (mean epoch loss of 0.995), because a single myelin sheath occupies only a very small percentage of the image.

These two issues can both be solved by computing the loss only once per image. To do this, the predictions for every axon in an image should be combined into one segmentation mask, and the loss computed over that mask. This way, there would be only 158 parameter updates per epoch.
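A minimal sketch of the proposed aggregation, using NumPy stand-ins for the per-axon probability maps (the actual training loop presumably uses PyTorch tensors with monai.losses.DiceLoss; the helper names here are hypothetical):

```python
import numpy as np

def aggregate_predictions(per_axon_preds):
    # Combine per-axon probability maps into one segmentation mask
    # by taking the element-wise maximum (a clipped sum would also work).
    return np.max(np.stack(per_axon_preds), axis=0)

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss over the whole image: 1 - Dice coefficient.
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two small per-axon masks covering different regions of a 4x4 image.
axon_a = np.zeros((4, 4)); axon_a[0, 0] = 1.0
axon_b = np.zeros((4, 4)); axon_b[3, 3] = 1.0
target = axon_a + axon_b

full_mask = aggregate_predictions([axon_a, axon_b])
loss = dice_loss(full_mask, target)  # one loss (and one backward pass) per image
```

Computing the loss on the aggregated mask means a single backward pass and optimizer step per image, so an epoch over the 158 images costs 158 updates instead of 10k+, and the foreground fraction of the combined mask is large enough that the Dice loss is no longer saturated near 1.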
