Whilst working on #7, it became clear that we are using multiple versions of normal transformations, tensorflow bijector chains, scaling and uniform-distribution quantiles.
This should ideally be more unified, so that at the very least we are using the same functions for transformations (rather than a mix of `tensorflow_probability.distributions` functions and `tensorflow_probability.bijectors` functions). Even better would be to include the pre-processing `_forward_transform` and `_inverse_transform` as part of the full bijector chain, as sketched below.
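As a rough illustration, the pre-processing could itself be expressed as a single tfp bijector chain. This is a minimal sketch, not the package's actual code: the helper name `gaussianization_bijector` and the per-parameter bounds `theta_min`/`theta_max` are made up here for illustration.

```python
import tensorflow_probability as tfp

tfb = tfp.bijectors

def gaussianization_bijector(theta_min, theta_max):
    """Pre-processing as one bijector chain: data -> [0, 1] via the
    uniform CDF, then [0, 1] -> standard normal via the inverse
    normal CDF. tfb.Chain applies its bijectors right to left."""
    return tfb.Chain([
        tfb.Invert(tfb.NormalCDF()),               # u -> Phi^{-1}(u)
        tfb.Scale(1.0 / (theta_max - theta_min)),  # (x - min) / (max - min)
        tfb.Shift(-theta_min),
    ])
```

Written this way, `forward` plays the role of `_forward_transform`, `inverse` plays the role of `_inverse_transform`, and the log-det-Jacobian terms come for free once the chain is composed with the rest of the flow.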
Okay, so it has been a while since this issue was opened and the code has changed a bit, but the `_forward_transform` and `_inverse_transform` functions still form an integral part of the code. A more consistent way to incorporate the gaussianization of the data (see figure 2 in this paper) that these functions perform is to define the gaussianization as a bijector inside the chain for the MAF, along the lines of the sketch below. We can then remove the normalisation step for theta in `_training`, I think. This needs testing, and I still need to work out what to do with the log probability and sampling, but I will try and look into it.
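For concreteness, here is a hedged sketch of how that might look, composing the gaussianization bijector from the earlier sketch with the MAF so that the transformed distribution's `log_prob` and `sample` account for the pre-processing automatically. The function name `build_maf` and the network hyperparameters are illustrative assumptions, not the actual implementation:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
tfb = tfp.bijectors

def build_maf(ndims, theta_min, theta_max, hidden_units=(50, 50)):
    # Illustrative MAF; the real network shape/activation may differ.
    maf = tfb.MaskedAutoregressiveFlow(
        shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
            params=2, hidden_units=list(hidden_units), activation='tanh'))
    # gaussianization_bijector maps data -> standard normal, so it is
    # inverted here: samples flow base normal -> MAF -> data space.
    pre = gaussianization_bijector(theta_min, theta_max)
    bijector = tfb.Chain([tfb.Invert(pre), maf])
    base = tfd.MultivariateNormalDiag(loc=tf.zeros(ndims))
    return tfd.TransformedDistribution(distribution=base, bijector=bijector)
```

With this wiring, `dist.log_prob(theta)` and `dist.sample(n)` already include the pre-processing, which is what would let the separate normalisation of theta in `_training` be dropped.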