Prepare target models before running attacks #249
Conversation
@@ -151,6 +151,8 @@ def configure_gradient_clipping(
        for group in optimizer.param_groups:
            self.gradient_modifier(group["params"])

    # Turn off the inference mode, so we will create perturbation that requires gradient.
    @torch.inference_mode(False)
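For context, here is a minimal standalone sketch (not MART code) of why escaping inference mode matters: tensors created under `torch.inference_mode()` cannot participate in autograd, so a perturbation that must receive gradients has to be created with inference mode locally disabled.

```python
import torch

def make_perturbation(shape):
    # Escape inference mode locally so the perturbation can require gradient.
    with torch.inference_mode(False):
        return torch.zeros(shape, requires_grad=True)

# Even when the surrounding pipeline runs under inference mode...
with torch.inference_mode():
    delta = make_perturbation((3,))

print(delta.requires_grad)  # True
```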
Why is this necessary now? I thought PL manages this already?
Anomalib turns on the inference mode when we run `anomalib test`. MART's trainer turns off the inference mode by default, as in MART/mart/configs/trainer/default.yaml (line 19 in 05886f7): `inference_mode: False`. But Anomalib has its own trainer.
self.training = self.module.training
self.module.train(True)
# Set some children modules of "excludes" to eval mode instead.
self.selective_eval_mode("", self.module, self.excludes)
What is going on with the empty string?
We don't know the variable name of the model, so the module path starts with a dot. This is for debug logging only, which prints messages like this:
Set .model.student_model.feature_extractor.layer3[1].bn1: BatchNorm2d to eval mode.
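A standalone sketch of how such a helper might walk named children and log dotted paths (the class and exclusion names here are illustrative, not the exact MART implementation):

```python
from torch import nn

def selective_eval_mode(prefix, module, excludes):
    # Recurse through named children; paths start with a dot because the
    # root model's variable name is unknown (the initial prefix is "").
    for name, child in module.named_children():
        path = f"{prefix}.{name}"
        if path in excludes:
            child.eval()
            print(f"Set {path}: {type(child).__name__} to eval mode.")
        else:
            selective_eval_mode(path, child, excludes)

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(4, 4), nn.BatchNorm1d(4))
        self.head = nn.Linear(4, 2)

net = Net().train()
selective_eval_mode("", net, {".backbone"})
print(net.backbone.training, net.head.training)  # False True
```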
with MonkeyPatch(pl_module, "log", lambda *args, **kwargs: None):
    outputs = pl_module.training_step(batch, dataloader_idx)
with training_mode(
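The `MonkeyPatch` here presumably swaps `pl_module.log` for a no-op so that calling `training_step()` mid-attack does not pollute the pipeline's metrics. A minimal standalone sketch of that pattern (not the actual MART class):

```python
class MonkeyPatch:
    # Temporarily replace obj.<name> with value, restoring the original on exit.
    def __init__(self, obj, name, value):
        self.obj, self.name, self.value = obj, name, value

    def __enter__(self):
        self.saved = getattr(self.obj, self.name)
        setattr(self.obj, self.name, self.value)

    def __exit__(self, *exc):
        setattr(self.obj, self.name, self.saved)

class FakeModule:
    # Stand-in for a LightningModule with a metric-reporting log() method.
    def __init__(self):
        self.logged = []
    def log(self, name, value):
        self.logged.append((name, value))
    def training_step(self, batch, idx):
        self.log("loss", 0.5)
        return 0.5

m = FakeModule()
with MonkeyPatch(m, "log", lambda *args, **kwargs: None):
    loss = m.training_step(None, 0)  # log() is silenced inside the context

print(loss, m.logged)  # 0.5 []
```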
What is the use case here? Are you seeing train-specific code diverging from eval-specific code in some use case?
Yes. Many model implementations return the prediction in eval mode, and return the loss in training mode. In our use case, `anomalib test` runs the model in eval mode, in which we won't get the loss.
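A minimal sketch of the pattern described here, assuming a `training_mode` context manager like the one used in the diff above (the model class is illustrative):

```python
import contextlib
import torch
from torch import nn

class TinyModel(nn.Module):
    # Typical pattern: return the loss in training mode, predictions in eval mode.
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 1)

    def forward(self, x, target=None):
        pred = self.linear(x)
        if self.training:
            return nn.functional.mse_loss(pred, target)
        return pred

@contextlib.contextmanager
def training_mode(module):
    # Temporarily switch to train mode, then restore the previous mode.
    was_training = module.training
    module.train(True)
    try:
        yield
    finally:
        module.train(was_training)

model = TinyModel().eval()  # an eval-mode pipeline like `anomalib test`
x, y = torch.randn(2, 4), torch.randn(2, 1)
with training_mode(model):
    loss = model(x, y)  # scalar loss, usable for the attack's gradient

print(loss.dim(), model.training)  # 0 False
```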
What does this PR do?

This PR adds two preparations before running attacks in an external Lightning pipeline:
- Turn off the inference mode, so we can create perturbations that require gradient.
- Run the target model's `training_step()` in training mode to obtain the loss.

Type of change
Please check all relevant options.
Testing

Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.

- `pytest`
- `CUDA_VISIBLE_DEVICES=0 python -m mart experiment=CIFAR10_CNN_Adv trainer=gpu trainer.precision=16` reports 70% (21 sec/epoch).
- `CUDA_VISIBLE_DEVICES=0,1 python -m mart experiment=CIFAR10_CNN_Adv trainer=ddp trainer.precision=16 trainer.devices=2 model.optimizer.lr=0.2 trainer.max_steps=2925 datamodule.ims_per_batch=256 datamodule.world_size=2` reports 70% (14 sec/epoch).

Before submitting
- Run the `pre-commit run -a` command without errors

Did you have fun?
Make sure you had fun coding 🙃