
WIP: Adapt #1703

Draft · wants to merge 6 commits into develop

Conversation

davidslater
Contributor

No description provided.

@mwartell mwartell self-assigned this Oct 20, 2022
@lcadalzo
Contributor

I'm running the following code:

model = cifar10_model()
xy_batches, classes = cifar10_data()
attack = PGD_Linf(model)

for batch in xy_batches:
    x, y = batch
    x_adv = attack(x)

and getting the following error:

Traceback (most recent call last):
  File "run.py", line 9, in <module>
    x_adv = attack(x)
  File "/home/lucas/Desktop/gard/git/twosixlabs/armory/armory/adapt/pytorch.py", line 644, in __call__
    self.gradient()
  File "/home/lucas/Desktop/gard/git/twosixlabs/armory/armory/adapt/pytorch.py", line 582, in gradient
    self.loss = self.loss_fn(self.y_pred, self.y_target)
  File "/home/lucas/miniconda3/envs/armory-adapt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/lucas/Desktop/gard/git/twosixlabs/armory/armory/adapt/pytorch.py", line 511, in forward
    return super().forward(torch.log(input), target)
  File "/home/lucas/miniconda3/envs/armory-adapt/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 211, in forward
    return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
  File "/home/lucas/miniconda3/envs/armory-adapt/lib/python3.8/site-packages/torch/nn/functional.py", line 2689, in nll_loss
    return torch._C._nn.nll_loss_nd(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: 0D or 1D target tensor expected, multi-target not supported

which I think occurs because self.y_target is a tensor of shape torch.Size([1, 10]) (the same shape as self.y_pred) rather than a 0D/1D tensor of class indices
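A standalone sketch of that shape mismatch (not armory code; the tensors here are made up for illustration): `F.nll_loss` rejects a `[batch, num_classes]` target, and reducing it to class indices with `argmax` resolves the error.

```python
import torch
import torch.nn.functional as F

# F.nll_loss expects a 0D/1D tensor of class indices,
# not a [batch, num_classes] target shaped like y_pred.
log_probs = torch.log_softmax(torch.randn(1, 10), dim=1)
y_target = F.one_hot(torch.tensor([3]), num_classes=10)  # shape [1, 10]

try:
    F.nll_loss(log_probs, y_target)
    raised = False
except RuntimeError:
    raised = True  # the multi-target error, as in the traceback above

# Reducing the target to class indices resolves it:
y_idx = y_target.argmax(dim=1)  # shape [1]
loss = F.nll_loss(log_probs, y_idx)
```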

@davidslater
Contributor Author

Thanks, @lcadalzo, I hadn't tested with model predictions.

5) Minimal dependencies (framework-specific)

User stories:
a) Initialize an attack with an interesting input
Contributor

For the user story Initialize an attack with an interesting input, I'm not sure I follow exactly what's meant, since you can pass whatever you like to ART's generate() or, in the case here, __call__(). Or do you mean the init within the epsilon ball that gets added to the input? If so, it looks like this code restricts you to either random or None.

Contributor Author

Yes, the desire is to enable the init within the epsilon ball. The code needs to be modified to enable passing in an init function instead of those two choices.


# logits:
self.task_metric = lambda y_pred, y_true: (
    y_pred.detach().argmax(dim=1) == y_true.detach()
)
Contributor

Just pointing out a challenge here: while setting this task_metric is simple enough for classification, it would break for non-classification tasks and is much less straightforward to define in those cases.

Contributor Author

Point taken
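One possible direction for the metric question above (illustrative only; these names are not in the PR): let the caller supply the metric per task rather than hard-coding the argmax form.

```python
import torch

# Classification: the argmax form from the diff above.
classification_metric = lambda y_pred, y_true: (
    y_pred.detach().argmax(dim=1) == y_true.detach()
)

# Regression (hypothetical example): per-element success within an absolute
# tolerance, since argmax is meaningless there.
def regression_metric(y_pred, y_true, tol=0.1):
    return (y_pred.detach() - y_true.detach()).abs() <= tol
```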

return tuple(position)


class PGD_RandomPatch(PGD_Patch2):
Contributor

Is it necessary to have an additional class for patch PGD and random patch PGD, as opposed to this just being something set or defined in the former?

Contributor Author

I would like to collapse as much as possible, but I wanted to start from simple attacks, move to complex ones, and then see how to combine them effectively.
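A sketch of one way the two classes might collapse (class and argument names are hypothetical): make the patch position a constructor parameter, with "random" as one value, instead of a subclass.

```python
import random

class PatchPGD:
    # Single class covering both fixed-position and random-position patches.
    def __init__(self, patch_size, position="random"):
        self.patch_size = patch_size
        self.position = position  # "random" or a fixed (top, left) tuple

    def patch_position(self, height, width):
        if self.position == "random":
            return (
                random.randrange(height - self.patch_size + 1),
                random.randrange(width - self.patch_size + 1),
            )
        return tuple(self.position)
```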

# ])(epsilon)


class EoTPGD_Linf(PGD_Linf):
Contributor

It seems like the number of distinct classes needed is multiplying rapidly. Would you also need a class for EoT L2, EoT Patch, EoT random patch, etc.?

Contributor Author

I should have added a note on this one: it was meant to demonstrate how to extend a class to use EoT. I would actually remove this class. Or, if EoT can be added as a wrapper around an existing class, that would prevent this sort of class explosion.

I need a better way of handling / representing random components, one that would enable specifying what you're performing the expectation over.
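For reference, EoT as a wrapper could look roughly like this (a sketch under assumed names, not armory code): average the gradient over sampled transformations, so any existing attack's gradient step can be reused unchanged.

```python
import torch

def eot_gradient(grad_fn, x, sample_transform, n_samples=8):
    # Expectation over Transformation: estimate E_t[grad(t(x))] by averaging
    # the gradient over n_samples sampled transformations of the input.
    grads = [grad_fn(sample_transform(x)) for _ in range(n_samples)]
    return torch.stack(grads).mean(dim=0)
```

Passing `sample_transform` explicitly would also speak to the random-components point: it names exactly what the expectation is taken over.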

@mwartell mwartell removed their assignment Oct 26, 2022