train/valid probs #63
Guys,
We should have a mechanism to compute either accuracies on a subset of the training data, or objective function values on validation data. (Are we doing this already?)
This will show whether our model is underfitting or overfitting, which right now I have no idea about.

Comments
train.py will give you the BCE loss on the train and val data. If you want to further see the segmentation results on train and val, you need to run segment.py on them and then use scoring.py to get the mAP (mean average precision).
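To make this concrete, here is a minimal sketch of reporting the BCE loss on both splits, assuming a PyTorch model that outputs per-pixel probabilities; the names `model`, `train_loader` and `val_loader` are placeholders, not the actual train.py interface:

```python
# Hedged sketch only: `model`, `train_loader` and `val_loader` are assumed to
# exist; this is not the repo's train.py, just the idea of logging both losses.
import torch
import torch.nn as nn

criterion = nn.BCELoss()

def average_bce(model, loader, device="cpu"):
    """Average binary cross-entropy over one pass of a DataLoader."""
    model.eval()
    total, batches = 0.0, 0
    with torch.no_grad():
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            probs = model(images)              # expected to lie in [0, 1]
            total += criterion(probs, masks).item()
            batches += 1
    return total / max(batches, 1)

# After each epoch:
# print("train BCE %.4f  val BCE %.4f"
#       % (average_bce(model, train_loader), average_bce(model, val_loader)))
```

Printing both numbers each epoch makes the train/val gap (and hence any overfitting) easy to watch.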
But the segmentation algorithm also produces a final log-prob for each image. I guess it would be helpful to write that to disk too (and maybe also its average over a whole test set).
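As a rough sketch of what that could look like (the `segment_image` helper and the output path are hypothetical, not the repo's actual API):

```python
# Hedged sketch: assumes a `segment_image(image_id)` helper that returns
# (mask, final_logprob); writes one "image_id logprob" line per image and
# reports the average over the whole set.
def dump_final_logprobs(image_ids, segment_image, out_path="final_logprobs.txt"):
    total = 0.0
    with open(out_path, "w") as f:
        for image_id in image_ids:
            mask, logprob = segment_image(image_id)
            f.write("%s %.4f\n" % (image_id, logprob))
            total += logprob
    avg = total / max(len(image_ids), 1)
    print("average final log-prob over %d images: %.4f" % (len(image_ids), avg))
    return avg
```

Keeping one line per image also makes it easy to spot images whose log-prob is unusually low.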
OK. How different are the BCE losses for train and val in the DSB2018 setup?
Almost the same. So there is no overfitting yet. I think heavier image augmentation will help since the dataset is relatively small.
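For example, heavier augmentation could look something like the sketch below, using torchvision's functional transforms so the image and its mask get the same geometric transform; the specific transforms and ranges are only illustrative, not what the recipe currently does:

```python
# Hedged sketch of joint image/mask augmentation; the transform choices and
# ranges here are illustrative only.
import random
import torchvision.transforms.functional as TF

def augment(image, mask):
    """Apply the same random flips/rotation to a PIL image and its mask."""
    if random.random() < 0.5:
        image, mask = TF.hflip(image), TF.hflip(mask)
    if random.random() < 0.5:
        image, mask = TF.vflip(image), TF.vflip(mask)
    angle = random.uniform(-15.0, 15.0)
    image, mask = TF.rotate(image, angle), TF.rotate(mask, angle)
    # Photometric jitter is applied to the image only; the mask must stay untouched.
    image = TF.adjust_brightness(image, random.uniform(0.8, 1.2))
    return image, mask
```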
Hossein, that's a good point. Perhaps we could produce a log line per file, summarizing various stats relating to the segmentation. I assume the stderr (and probably stdout too) of the segmenter code gets put in a log file.

Yiwen, regarding the train/valid objective values: I think you can safely increase the number of parameters in the model until you see overfitting, and at that point start to worry about image augmentation. (I see image augmentation as primarily a way to reduce overfitting by artificially expanding the amount of training data.)
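Regarding the per-file log line, something like the following sketch would give one summary line per image in the segmenter's log; the field names and the `stats` dict are made up here, not the segmenter's real output:

```python
# Hedged sketch: one log line per image with whatever segmentation stats we
# decide to track; logging goes to stderr by default, so it ends up in the
# job's log file.
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s segment.py: %(message)s")

def log_segmentation_stats(image_id, stats):
    logging.info("image=%s num_objects=%d final_logprob=%.4f elapsed=%.2fs",
                 image_id, stats["num_objects"],
                 stats["final_logprob"], stats["elapsed"])
```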
OK, will try it.

Will do it.