
train/valid probs #63

Open
danpovey opened this issue May 30, 2018 · 7 comments

@danpovey
Contributor

Guys,
We should have a mechanism to compute either accuracies on a subset of training data, or objective function values on validation data. (Are we doing this already?)
This will show whether our model is underfitting or overfitting, which right now I have no idea about.
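As a point of reference, a minimal sketch of such a check, assuming a PyTorch model trained with BCEWithLogitsLoss (the model, loaders, and helper name below are illustrative, not the repo's actual code):

```python
import torch

def evaluate(model, loader, device):
    """Mean BCE loss and pixel accuracy on a held-out loader (illustrative helper)."""
    criterion = torch.nn.BCEWithLogitsLoss()
    model.eval()
    total_loss, correct, total = 0.0, 0, 0
    with torch.no_grad():
        for images, targets in loader:
            images, targets = images.to(device), targets.to(device)
            logits = model(images)
            total_loss += criterion(logits, targets).item() * images.size(0)
            preds = (torch.sigmoid(logits) > 0.5).float()
            correct += (preds == targets).sum().item()
            total += targets.numel()
    model.train()
    return total_loss / len(loader.dataset), correct / total

# After each epoch, compare a subset of the training data against the validation set:
#   train_loss, train_acc = evaluate(model, train_subset_loader, device)
#   val_loss, val_acc = evaluate(model, val_loader, device)
# A widening gap suggests overfitting; both staying poor suggests underfitting.
```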

@YiwenShaoStephen
Contributor

train.py will give you the BCE loss on both the train and val data. If you want to further inspect the segmentation results on train and val, you need to run segment.py on them and then use scoring.py to get the mAP (mean average precision).
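For rough intuition about what such a score measures, here is a simplified average-precision computation over predicted instance masks at a single IoU threshold (greedy matching, trapezoidal PR integration); this is only a sketch of the general technique, not the logic of the repo's scoring.py:

```python
import numpy as np

def mask_iou(a, b):
    """IoU between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def average_precision(pred_masks, pred_scores, gt_masks, iou_thresh=0.5):
    """AP at one IoU threshold: rank predictions by confidence, greedily match
    each to an unused ground-truth mask, then integrate the PR curve.
    mAP would average this over images (and possibly IoU thresholds)."""
    order = np.argsort(pred_scores)[::-1]
    matched = [False] * len(gt_masks)
    tp = np.zeros(len(order))
    for rank, idx in enumerate(order):
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gt_masks):
            if not matched[j]:
                iou = mask_iou(pred_masks[idx], gt)
                if iou > best_iou:
                    best_iou, best_j = iou, j
        if best_iou >= iou_thresh:
            tp[rank] = 1
            matched[best_j] = True
    precision = np.cumsum(tp) / (np.arange(len(order)) + 1)
    recall = np.cumsum(tp) / max(len(gt_masks), 1)
    return float(np.trapz(precision, recall)) if len(order) else 0.0
```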

@hhadian
Contributor

hhadian commented May 30, 2018 via email

@danpovey
Contributor Author

danpovey commented May 30, 2018 via email

@YiwenShaoStephen
Contributor

They are almost the same, so there is no overfitting yet. I think heavier image augmentation will help, since the dataset is relatively small.
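For example, a heavier augmentation pipeline built with torchvision might look like the sketch below; the ranges are illustrative, and for segmentation the geometric transforms would also have to be applied to the label masks:

```python
import torchvision.transforms as T

# Illustrative ranges; the geometric transforms (flips, rotation, crop) must be
# applied identically to the segmentation masks, e.g. via a joint transform.
heavy_augmentation = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.RandomRotation(degrees=10),
    T.RandomResizedCrop(size=512, scale=(0.8, 1.0)),
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    T.ToTensor(),
])
```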

@danpovey
Contributor Author

danpovey commented May 30, 2018 via email

@YiwenShaoStephen
Contributor

OK, will try it.

@hhadian
Contributor

hhadian commented May 30, 2018 via email
