```python
# The first 2/3 of 'probs' is the negative class (normal and noisy samples),
# and the last 1/3 is the positive class (adversarial samples).
_, _, auc_score = compute_roc(
    probs_neg=probs[:2 * n_samples],
    probs_pos=probs[2 * n_samples:]
)
print('Detector ROC-AUC score: %0.4f' % auc_score)
```
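For context, `compute_roc` presumably just wraps scikit-learn's ROC utilities; a minimal sketch under that assumption (the repo's actual helper may differ in detail):

```python
# Minimal sketch of compute_roc, assuming it wraps scikit-learn's
# roc_curve/auc; not necessarily the repo's exact implementation.
import numpy as np
from sklearn.metrics import roc_curve, auc

def compute_roc(probs_neg, probs_pos):
    probs = np.concatenate((probs_neg, probs_pos))
    labels = np.concatenate((np.zeros(len(probs_neg)), np.ones(len(probs_pos))))
    fpr, tpr, _ = roc_curve(labels, probs)
    return fpr, tpr, auc(fpr, tpr)
```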
Evaluating on training data leads to a biased result. Did I miss anything?
That's what I thought. It seems that the ROC-AUC evaluation uses the same data that was used to train the model. Is this the implementation the authors intended?
The code that creates the detector (logistic regression classifier):

detecting-adversarial-samples/scripts/detect_adv_samples.py, lines 149 to 155 in 2c26b60
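A minimal sketch of the pattern described below, with illustrative names rather than the repo's actual signature:

```python
# Illustrative sketch (not the repo's code): the training routine fits the
# detector and returns the very arrays it was fit on, which the caller
# later reuses for evaluation.
from sklearn.linear_model import LogisticRegressionCV

def train_lr(values, labels):
    lr = LogisticRegressionCV().fit(values, labels)
    return values, labels, lr
```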
The variables `values` and `labels` returned by `train_lr()` represent the training data that the model has been trained on. At the end, the detector is evaluated on the data it was trained on (line 159 uses `values`, which is the training data returned by `train_lr()`):

detecting-adversarial-samples/scripts/detect_adv_samples.py, lines 157 to 168 in 2c26b60
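For comparison, a minimal sketch of an unbiased evaluation using a held-out split, assuming the detector is an sklearn logistic regression (`train_and_eval_detector` is a hypothetical helper, not part of the repo):

```python
# Hypothetical fix: hold out a test set before fitting the detector, so the
# ROC-AUC is computed on data the model has never seen.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def train_and_eval_detector(X, y, test_size=0.3, seed=0):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, random_state=seed, stratify=y)
    lr = LogisticRegression().fit(X_train, y_train)
    # Score only the held-out split for an unbiased ROC-AUC estimate.
    return lr, roc_auc_score(y_test, lr.predict_proba(X_test)[:, 1])
```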