Have a trajectory learner example and reform the documentation build flow #167
Conversation
Co-authored-by: Pierre-Antoine Comby <[email protected]>
LGTM
assert_almost_allclose(
    adj_data.cpu().detach(),
    adj_data_ndft.cpu().detach(),
    atol=1e-1,
    rtol=1e-1,
    mismatch=20,
)

# Check if nufft and ndft w.r.t trajectory are close in the backprop
I had to add this check. Turns out the NDFT was not right (see above) for SENSE.
assert_almost_allclose(
    gradient_ndft_ktraj.cpu().numpy(),
    gradient_nufft_ktraj.cpu().numpy(),
    atol=1e-2,
    rtol=1e-2,
    mismatch=20,
)
We were too easy on the testing, and hence we missed this bug.
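For context, here is a minimal sketch of what such a gradient check amounts to, assuming an explicit NDFT written in PyTorch; the names ndft, ktraj, and image are illustrative only, not mri-nufft APIs. The autograd gradient with respect to the trajectory is compared against a central finite difference.

import math
import torch

def ndft(ktraj, image):
    # Explicit 2D non-uniform DFT: sum_r exp(-2i*pi * k.r) * image[r]
    n = image.shape[0]
    grid = torch.stack(
        torch.meshgrid(torch.arange(n), torch.arange(n), indexing="ij"), dim=-1
    ).reshape(-1, 2).to(ktraj.dtype)
    phase = torch.exp(-2j * math.pi * (ktraj @ grid.T))
    return phase @ image.reshape(-1)

torch.manual_seed(0)
image = torch.randn(8, 8, dtype=torch.complex128)
ktraj = torch.rand(16, 2, dtype=torch.float64, requires_grad=True)

loss = ndft(ktraj, image).abs().pow(2).sum()
(grad_autograd,) = torch.autograd.grad(loss, ktraj)

# Central finite difference on one trajectory coordinate.
eps = 1e-5
k_plus, k_minus = ktraj.detach().clone(), ktraj.detach().clone()
k_plus[0, 0] += eps
k_minus[0, 0] -= eps
fd = (ndft(k_plus, image).abs().pow(2).sum()
      - ndft(k_minus, image).abs().pow(2).sum()) / (2 * eps)
assert torch.isclose(grad_autograd[0, 0], fd, rtol=1e-4)

If the NUFFT backward pass is wired correctly, its trajectory gradient should agree with the explicit NDFT one in the same way, up to the looser tolerances used in the test above.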
if self.uses_sense:
    self.smaps = self.smaps.conj()
Turns out, if we have SENSE, while we call fourier_op.op with toggled plans, we still need to maintain smaps.conj(). Fixed that here for all the use cases.
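For readers following along, a minimal sketch (illustrative NumPy, not mri-nufft's code) of why the conjugate maps are needed: with forward y_c = F(s_c * x), the adjoint is x_adj = sum_c conj(s_c) * F^H(y_c), which the dot-product test <A x, y> == <x, A^H y> verifies.

import numpy as np

rng = np.random.default_rng(0)
n, n_coils = 16, 4
smaps = rng.standard_normal((n_coils, n, n)) + 1j * rng.standard_normal((n_coils, n, n))

def forward(x):
    # Coil-wise sensitivity modulation followed by an orthonormal FFT.
    return np.fft.fft2(smaps * x, norm="ortho")

def adjoint(y):
    # Inverse orthonormal FFT per coil, then combine with the *conjugate* maps.
    return np.sum(np.conj(smaps) * np.fft.ifft2(y, norm="ortho"), axis=0)

x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
y = rng.standard_normal((n_coils, n, n)) + 1j * rng.standard_normal((n_coils, n, n))

lhs = np.vdot(forward(x), y)   # <A x, y>
rhs = np.vdot(x, adjoint(y))   # <x, A^H y>
assert np.allclose(lhs, rhs)   # fails if the conj() on smaps is dropped

Dropping the conj() makes this assertion fail, which is exactly the mismatch the new SENSE test above catches.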
@@ -40,8 +40,9 @@ def assert_almost_allclose(a, b, rtol, atol, mismatch, equal_nan=False):
     try:
         npt.assert_allclose(a, b, rtol=rtol, atol=atol, equal_nan=equal_nan)
     except AssertionError as e:
-        e.message += "\nMismatched elements: "
-        e.message += f"{np.sum(~val)} > {mismatch}(={mismatch_perc*100:.2f}%)"
+        message = getattr(e, "message", "")
PEP 352: we don't always have a message attribute on exceptions :P
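To make the intent concrete, here is a hedged sketch of such a helper; only the getattr line comes from the diff, and the mismatch-tolerance logic is an assumption about what the existing helper does. On Python 3, AssertionError has no .message attribute, so the extra context is built explicitly and re-raised.

import numpy as np
import numpy.testing as npt

def assert_almost_allclose(a, b, rtol, atol, mismatch, equal_nan=False):
    """Like npt.assert_allclose, but tolerate up to `mismatch` percent of mismatches."""
    val = np.isclose(a, b, rtol=rtol, atol=atol, equal_nan=equal_nan)
    n_bad = np.sum(~val)
    if n_bad <= mismatch / 100 * val.size:
        return
    try:
        npt.assert_allclose(a, b, rtol=rtol, atol=atol, equal_nan=equal_nan)
    except AssertionError as e:
        # Exceptions lost their .message attribute in Python 3, so build the
        # extended message explicitly and re-raise.
        message = getattr(e, "message", str(e))
        message += f"\nMismatched elements: {n_bad} ({100 * n_bad / val.size:.2f}% > {mismatch}%)"
        raise AssertionError(message) from None

Under that assumed semantics, mismatch=20 means up to 20% of the elements may disagree before the assertion fires.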
I will add @paquiteau to go through some "new" changes :P @alineyyy, if you have already started your review, please watch out for a lot of new changes... Can you please tell me if the code is understandable, and whether we should add more comments for clarity?
LGTM! Thanks for catching all these bugs!
grad_traj = torch.transpose(
    torch.sum(grad_traj, dim=1),
    0,
    1,
)
Thank you for making this step easier!!
def samples(self, value):
    self._samples_torch = value
    self.nufft_op.samples = value.detach().cpu().numpy()
I love this setter! It really avoids redundantly moving the samples between devices in the test!
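For anyone reusing the pattern, a self-contained sketch (class and attribute names are assumptions, not the library's API): the setter keeps the autograd-tracked torch tensor and the plain numpy samples the NUFFT backend expects in sync, so callers never have to move devices by hand.

from types import SimpleNamespace
import torch

class TrajectoryHolder:
    def __init__(self, nufft_op, samples):
        self.nufft_op = nufft_op
        self.samples = samples  # goes through the setter below

    @property
    def samples(self):
        return self._samples_torch

    @samples.setter
    def samples(self, value):
        self._samples_torch = value
        # The backend only ever sees a detached CPU numpy copy.
        self.nufft_op.samples = value.detach().cpu().numpy()

backend = SimpleNamespace(samples=None)  # stand-in for the raw NUFFT op
holder = TrajectoryHolder(backend, torch.rand(32, 2, requires_grad=True))
print(type(holder.samples), type(backend.samples))  # torch.Tensor vs. numpy.ndarray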
LGTM! Thank you for making these changes, which make it simpler for users (including me XD)! The code and the comments are also clear enough for me to understand!
This is an updated version of #142, rebased on #156. Ideally we need #156 to go in first, after which we can merge this. I opened a new PR because I somehow ended up with two branches and started working on this one :P