As far as I understand, the input to distribution matching should be the final output of the generator. But in the multi-step generator case, we use the intermediate estimated x_0|t as the input instead. Why?
You are right. However, using the final output would require us to backpropagate through time (i.e., through all the generator steps), which adds more GPU memory consumption. It should be possible to do this with some system optimization, but we didn't get a chance to try because of this concern.
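The memory argument above can be made concrete with a small sketch. The code below is a hypothetical illustration, not the repository's implementation; `generator`, `dmd_loss`, `add_noise`, and `timesteps` are assumed placeholder names. The point is that earlier generator steps run under `torch.no_grad()`, so activations and gradients are only kept for the single step whose x_0|t prediction feeds the distribution matching loss.

```python
import random
import torch

def multistep_generator_dmd_step(generator, dmd_loss, add_noise, timesteps, z):
    """One training iteration for a K-step generator with per-step distribution matching.

    A minimal sketch assuming:
      generator(x, t) -> predicted x_0 given a noisy input x at timestep t
      add_noise(x0, t) -> x0 re-noised to the noise level of timestep t
      dmd_loss(x0)     -> distribution matching loss on an x_0 prediction
    """
    # Pick a random step of the generator's schedule to train on.
    k = random.randrange(len(timesteps))

    # Roll the generator forward to step k WITHOUT tracking gradients,
    # so no activations from the earlier steps are kept in memory.
    x = z
    with torch.no_grad():
        for t_prev, t_next in zip(timesteps[:k], timesteps[1 : k + 1]):
            x0_pred = generator(x, t_prev)   # intermediate x_0|t estimate
            x = add_noise(x0_pred, t_next)   # re-noise to the next step's level

    # Only this single forward pass is differentiated.
    x0_pred = generator(x, timesteps[k])

    # Distribution matching is applied to the intermediate x_0|t prediction,
    # not to the final K-step output, avoiding backprop through all steps.
    return dmd_loss(x0_pred)
```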
Thanks. I think I understand now: the multi-step generator is like an LCM model. The difference is that LCM needs to align the final output x_0 across all steps on the ODE trajectory, while DMD needs to match the distribution to the pre-trained diffusion model at every step on the ODE trajectory.
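One schematic way to state that contrast (notation assumed, not taken from this thread or the repo): LCM-style consistency training penalizes disagreement between x_0 predictions at adjacent points of the same ODE trajectory, while DMD matches the distribution of the generator's (re-noised) x_0|t predictions to the teacher's, with the gradient driven by a score difference.

```latex
% LCM-style consistency: predictions along one ODE trajectory must agree.
\mathcal{L}_{\mathrm{CM}} =
  \mathbb{E}_t\!\left[\, d\!\left(f_\theta(x_t, t),\; f_{\theta^-}(x_{t'}, t')\right) \right]

% DMD-style distribution matching: generator samples must match the teacher
% distribution at every noise level, via the real/fake score difference.
\nabla_\theta \mathcal{L}_{\mathrm{DMD}} \approx
  \mathbb{E}_t\!\left[\, \left(s_{\mathrm{fake}}(x_t) - s_{\mathrm{real}}(x_t)\right)
  \frac{\partial x_t}{\partial \theta} \right]
```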