Hi,
Consider the following lines of code:
cond1 = model.encode(batch)                       # semantic latent z_sem
xT = model.encode_stochastic(batch, cond1, T=50)  # DDIM-inverted stochastic code
pred = model.render(noise=xT, cond=cond1, T=20)   # reconstructs the input image

# xT_rand = torch.rand(xT.shape, device=device)   # uniform noise in [0, 1)
# pred_rand = model.render(noise=xT_rand, cond=cond1, T=20)
The autoencoding above works perfectly, as expected. However, if I use xT_rand with the same cond1 instead of xT, I get nothing but noise in the predicted image. Could you please help me understand why this happens? As mentioned in the paper, most of the semantic information is captured in z_sem, so why does it fail in this case?
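In case it is relevant: my understanding is that the reverse process expects x_T drawn from a standard normal prior, whereas torch.rand samples from Uniform[0, 1). A minimal sketch of the variant I would have expected to work, reusing the same model, cond1, and device as above (torch.randn is the only change; this is just my assumption, not something confirmed by the repo):

xT_rand = torch.randn(xT.shape, device=device)  # standard normal, matching the diffusion prior
pred_rand = model.render(noise=xT_rand, cond=cond1, T=20)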
Your response will be greatly appreciated.
Thank you!