AutoEncoder loss values #425
Unanswered
MB-MuratBayraktar asked this question in Q&A
Hello everyone, I am getting a loss of 0.95 and a val_loss of 0.93 after training on a P100 GPU for 50 epochs, and the loss does not decrease after that. From what I have read about how autoencoders work, they first learn the pattern of the normal data and can then flag abnormal behavior because anomalous points cannot be reconstructed easily. But what can we do when the problem is unsupervised (no labels available)? As far as I can tell, here in PyOD we train on X_train without taking the separation of normal/abnormal samples into account (a minimal sketch of what I mean is below). Could this be the cause of the high loss values?

One small thing to note: when I plot the features according to the labels produced by the prediction, they cluster very well, which is surprising given such a high loss value.

Thank you
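For reference, this is roughly the workflow I am assuming (a minimal sketch: the synthetic data and the contamination value are placeholders, and the architecture/epoch arguments are left at their defaults since their names differ between PyOD versions):

```python
# Minimal sketch of the unsupervised PyOD workflow described above.
# The synthetic data and the contamination value are placeholders.
import numpy as np
from pyod.models.auto_encoder import AutoEncoder

rng = np.random.default_rng(42)

# Unlabeled training set: mostly "normal" points plus a few injected outliers.
# In the real unsupervised setting we would not know which rows are which.
X_normal = rng.normal(loc=0.0, scale=1.0, size=(950, 10))
X_outlier = rng.normal(loc=6.0, scale=1.0, size=(50, 10))
X_train = np.vstack([X_normal, X_outlier])

# contamination only sets the score threshold; it does not guide training.
clf = AutoEncoder(contamination=0.05)
clf.fit(X_train)  # fit on the whole mixture -- no normal/abnormal split

scores = clf.decision_scores_  # per-sample reconstruction error on X_train
labels = clf.labels_           # 0 = inlier, 1 = outlier (thresholded scores)
print(scores.mean(), labels.sum())
```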
Replies: 1 comment

The unsupervised autoencoder does not differentiate between normal and abnormal samples during the learning process.
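To make this concrete (continuing the hypothetical sketch from the question above, and using PyOD's standard detector attributes), the 0/1 labels come from thresholding each sample's reconstruction error, so the flagged points can separate cleanly even when the average training loss stays high:

```python
# Continuation of the sketch in the question above (reuses the fitted `clf`
# and the `rng` generator). The labels come from thresholding the per-sample
# reconstruction error (decision_scores_), so the flagged points can separate
# cleanly even when the average training loss itself stays high.
inlier_scores = clf.decision_scores_[clf.labels_ == 0]
outlier_scores = clf.decision_scores_[clf.labels_ == 1]

print("threshold          :", clf.threshold_)
print("mean inlier score  :", inlier_scores.mean())
print("mean outlier score :", outlier_scores.mean())

# New data is scored the same way: reconstruction error vs. the fitted threshold.
X_new = rng.normal(loc=0.0, scale=1.0, size=(5, 10))
print(clf.decision_function(X_new))  # raw outlier scores
print(clf.predict(X_new))            # 0 = inlier, 1 = outlier
```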