The paper says that 'we can significantly reduce the training time to 4 hours if we initialize parts of our model with pretrained weights from EG3D'. As far as I know, EG3D only provides weights for the FFHQ dataset, while pix2pix3D uses the CelebAMask-HQ dataset. So I would like to know which weights you used.
Thank you.
We use the FFHQ checkpoint of EG3D, since FFHQ is also a dataset of human faces. We also report metrics without pretrained weights for a fair comparison with other baselines.
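For anyone trying to reproduce this, a minimal sketch of partial initialization in PyTorch is below. The path `eg3d_ffhq_state_dict.pt` and the helper name `load_pretrained_partial` are illustrative assumptions, not the repository's actual code; note that the official EG3D pickles store whole network objects (loaded via `legacy.load_network_pkl`), so in practice you would first extract a state dict from them.

```python
import torch

def load_pretrained_partial(model, ckpt_path="eg3d_ffhq_state_dict.pt"):
    """Copy pretrained parameters into `model` wherever names and shapes
    match; layers pix2pix3D adds or changes keep their fresh init.
    (Sketch only: assumes the checkpoint is a plain state_dict.)"""
    pretrained = torch.load(ckpt_path, map_location="cpu")
    model_state = model.state_dict()
    compatible = {
        k: v for k, v in pretrained.items()
        if k in model_state and v.shape == model_state[k].shape
    }
    model_state.update(compatible)
    model.load_state_dict(model_state)
    print(f"Initialized {len(compatible)}/{len(model_state)} tensors from checkpoint")
    return model
```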
At the same time, I would also like to ask about some design choices in pix2pix3D.
For the ground-truth pair {Ic, Is}, you adopt a reconstruction loss on images. That means that for a given mask Is, every random z is pushed to reproduce the same Ic. Won't this hurt the diversity of the model?
In my understanding, even without an image reconstruction loss, the network could still accomplish 'pix2pix' by supervising only the reconstruction loss between Is and the generated mask, so different random z values could generate different results.
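To make the two objectives being compared concrete, here is a hedged sketch; the generator signature `G(mask, z) -> (image, mask_logits)`, the L1/cross-entropy choices, and the equal weighting are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def paired_loss(G, img_gt, mask_gt, z):
    # Variant with paired {Ic, Is} supervision: both the rendered image
    # and the rendered mask are reconstructed, so every z is pulled toward
    # the same Ic for a given Is -- the diversity concern raised above.
    img_pred, mask_logits = G(mask_gt, z)
    return F.l1_loss(img_pred, img_gt) + F.cross_entropy(mask_logits, mask_gt)

def mask_only_loss(G, mask_gt, z):
    # Alternative raised in the question: supervise only the mask branch,
    # leaving the image appearance free to vary with z (image realism
    # would come from an adversarial loss, not shown here).
    _, mask_logits = G(mask_gt, z)
    return F.cross_entropy(mask_logits, mask_gt)
```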
Thank you for your great work.