Voxel shift maps
As mentioned in today's call, I think that adding the full fieldmap into nitransforms would be difficult. But thinking a bit more about voxel shifts, I think we can do that fairly straightforwardly as an argument to apply():
```python
import numpy as np
from scipy import ndimage as ndi

from nitransforms.linear import Affine


class TransformBase:
    def apply(
        self,
        spatialimage,
        reference=None,
        voxel_shift_map=None,
        order=3,
        mode="constant",
        cval=0.0,
        prefilter=True,
        output_dtype=None,
    ):
        ...
        # Ignoring transposes and homogeneous coordinates for brevity
        rascoords = self.map(reference.ndcoords)
        # Note the inverse: world (RAS) coordinates -> voxel indices of the moving image
        voxcoords = Affine(np.linalg.inv(spatialimage.affine)).map(rascoords).reshape(
            (reference.ndim, *reference.shape)
        )
        if voxel_shift_map is not None:
            # voxel_shift_map must have shape (reference.ndim, *reference.shape)
            # Alternately, we could accept it in (*reference.shape, reference.ndim)
            # and roll axes
            voxcoords += voxel_shift_map
        data = np.asanyarray(spatialimage.dataobj)
        resampled = ndi.map_coordinates(
            data,
            voxcoords,
            output=output_dtype,
            order=order,
            mode=mode,
            cval=cval,
            prefilter=prefilter,
        )
```
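To make the intended usage concrete, here is a rough sketch of how a caller might build and pass a VSM. The `voxel_shift_map` argument is the proposal above and does not exist today; the filenames, total readout time, phase-encoding axis, and coregistration matrix are all placeholders, and the Hz-to-voxels conversion is just the usual fieldmap × total-readout-time relation.

```python
import nibabel as nb
import numpy as np

from nitransforms.linear import Affine

epi = nb.load("bold.nii.gz")                          # moving EPI volume (placeholder path)
ref = nb.load("boldref.nii.gz")                       # reference grid, assumed aligned with the EPI axes
fieldmap_hz = nb.load("fieldmap.nii.gz").get_fdata()  # B0 map in Hz, resampled onto ref
total_readout_time = 0.05                             # seconds, from acquisition metadata
pe_axis = 1                                           # phase-encoding axis (j), assumed

# Shift along the phase-encoding axis only, in voxel units of the EPI grid
vsm = np.zeros((3, *ref.shape), dtype="float32")
vsm[pe_axis] = fieldmap_hz * total_readout_time

epi2ref = np.eye(4)                                   # placeholder RAS-to-RAS coregistration matrix
xfm = Affine(epi2ref, reference=ref)

# Proposed call: the VSM perturbs voxel indices after mapping into the EPI grid
unwarped = xfm.apply(epi, reference=ref, voxel_shift_map=vsm, order=3)
```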
Because map() operates on RAS coordinates and not voxel indices, we cannot apply the voxel shift map at that stage, so we probably do not want to include it as part of the transform itself.
We specifically do not want to describe voxel shift maps in the world space of the target image. While it may be possible to fit one in at the end of the chain, after the motion-correction transforms, any such solution would be more complicated than the above.
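As a small illustration of why mixing the two spaces gets awkward (made-up numbers, not nitransforms code): the same one-voxel shift corresponds to different world-space displacements depending on the image affine, so a shift map expressed in world coordinates would have to be re-derived for every grid, and every per-volume pose, it touches.

```python
import numpy as np

# Two plausible EPI affines: 2 mm isotropic, and 3 mm with a 10-degree in-plane rotation
aff_a = np.diag([2.0, 2.0, 2.0, 1.0])
theta = np.radians(10)
aff_b = np.array([
    [3 * np.cos(theta), -3 * np.sin(theta), 0.0, 0.0],
    [3 * np.sin(theta),  3 * np.cos(theta), 0.0, 0.0],
    [0.0, 0.0, 3.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# A one-voxel shift along the j (phase-encoding) axis, as a direction vector
shift_vox = np.array([0.0, 1.0, 0.0, 0.0])

print(aff_a @ shift_vox)  # [0., 2., 0., 0.]             -> 2 mm along world y
print(aff_b @ shift_vox)  # approx [-0.52, 2.95, 0., 0.] -> rotated, 3 mm long
```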
Per-volume transformations
The above discussion works for an individual volume. In order to correctly handle VSMs in a motion-corrected frame, we need TransformChains to become aware that they are involved in a per-volume transform. Unfortunately, right now, TransformChains are iterable over transforms, while a LinearTransformsMapping is iterable over volumes, which at the very least means that straightforward API composition isn't going to work.
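To see the mismatch concretely (illustrative only; `epi2t1w`, `t1w2mni`, and `motion_matrices` are placeholders):

```python
from nitransforms.linear import LinearTransformsMapping
from nitransforms.manip import TransformChain

chain = TransformChain([epi2t1w, t1w2mni])      # iterating yields the component transforms
hmc = LinearTransformsMapping(motion_matrices)  # iterating yields one Affine per volume

# There is no obvious way to line these two notions of iteration up:
# TransformChain([hmc, epi2t1w, t1w2mni]) has no way of knowing that hmc should
# expand per volume while the other links are broadcast across volumes.
```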
Currently, the per-volume iteration happens inside LinearTransformsMapping.apply() (nitransforms/nitransforms/linear.py, lines 395 to 498 at 1674e86).
A VSM- and multivolume-aware TransformChain could do what we want in apply(). Another thought is that we could treat transforms as data objects rather than actors; the interface could then be, for example, a standalone apply() function that takes the transform as an argument.
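A minimal sketch of what that interface could look like, reusing the resampling logic from the first snippet and, as there, ignoring transposes and homogeneous coordinates for brevity (none of this exists in nitransforms as written):

```python
import numpy as np
from scipy import ndimage as ndi

from nitransforms.linear import Affine


def apply(transform, spatialimage, reference=None, voxel_shift_map=None, **kwargs):
    """Resample ``spatialimage`` through ``transform`` onto ``reference``.

    The transform is treated purely as data: the only method relied upon is
    ``map()``, which takes and returns RAS coordinates.
    """
    rascoords = transform.map(reference.ndcoords)
    voxcoords = Affine(np.linalg.inv(spatialimage.affine)).map(rascoords).reshape(
        (reference.ndim, *reference.shape)
    )
    if voxel_shift_map is not None:
        voxcoords += voxel_shift_map
    return ndi.map_coordinates(np.asanyarray(spatialimage.dataobj), voxcoords, **kwargs)
```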
If we give up on defining apply() correctly for each transform, and leave them to focus on composing and mapping, it might make things cleaner. Just imagining how we might approach chains that include per-volume transforms:
```python
from __future__ import annotations

import itertools
from typing import Iterator


class TransformBase:
    n_transforms: int = 1

    def iter_transforms(self) -> Iterator[TransformBase]:
        """Repeat the current transform as often as required."""
        return itertools.repeat(self)


class AffineSeries(TransformBase):
    @property
    def n_transforms(self) -> int:
        return len(self.series)

    def iter_transforms(self) -> Iterator[TransformBase]:
        """Iterate over the defined series."""
        return iter(self.series)


class TransformChain(TransformBase):
    @property
    def n_transforms(self) -> int:
        lengths = [xfm.n_transforms for xfm in self.chain if xfm.n_transforms != 1]
        return min(lengths) if lengths else 1

    def iter_transforms(self) -> Iterator[TransformChain]:
        """Iterate over all transforms in the chain simultaneously, stopping with the first to stop."""
        return map(TransformChain, zip(*(xfm.iter_transforms() for xfm in self.chain)))
```