PyTorch implementation of a segmentation-guided approach for synthesising images that integrate features from two distinct domains.
- Clone the Dual-Domain-Synthesis repository:
git clone [email protected]:denabazazian/Dual-Domain-Synthesis.git
cd Dual-Domain-Synthesis
- Create the conda environment:
conda env create -f DDS.yml
- After installing the required dependencies, activate the environment:
conda activate DDS
- Save the segmentation models:
- Code to train a segmentation model from a single labelled image. It can be used to train segmentation models that produce eyes/nose/mouth and hair masks.
- The generative model for natural faces can be downloaded from: Link
- This code requires around 5 GB of GPU memory and takes about 5 minutes to run. The segmentation code includes a graphical user interface for labelling the one-shot ground truth used for training; a minimal sketch of the underlying idea is given after the command below.
python save_segmentation_model.py --generator_dir path/to/generator --segmentation_dir path/to/segmentation --part_key eyes
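For orientation, here is a minimal sketch of the one-shot training idea (in the spirit of repurpose-gan), not the repository's actual implementation. The inputs `features` (generator activations collected for one image) and `label` (the integer mask drawn in the GUI) are assumptions for illustration.

```python
# Minimal sketch (not save_segmentation_model.py itself): train a small per-pixel
# classifier on StyleGAN2 feature maps from a single hand-labelled mask.
# Assumptions: features -> [1, C, H, W] generator activations for one image,
#              label    -> [1, H, W] long tensor (e.g. 1 = eyes, 0 = background).
import torch
import torch.nn as nn

def train_pixel_classifier(features, label, num_classes=2, steps=300, lr=1e-3):
    _, c, h, w = features.shape
    clf = nn.Sequential(nn.Conv2d(c, 128, 1), nn.ReLU(), nn.Conv2d(128, num_classes, 1))
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        logits = clf(features)          # [1, num_classes, H, W]
        loss = ce(logits, label)        # per-pixel cross-entropy on the one-shot mask
        loss.backward()
        opt.step()
    return clf                          # e.g. torch.save(clf.state_dict(), segmentation_dir)
```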
- Save the source and target generative models:
- For instance, pre-trained generators from stylegan2-pytorch (source domain) and few-shot-gan-adaptation (target domain); a quick way to sanity-check the downloaded checkpoints is sketched below.
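The following check is a small assumption-laden sketch, not part of the repository: checkpoints from stylegan2-pytorch and few-shot-gan-adaptation typically store the EMA generator weights under a `g_ema` key, and this snippet only verifies that the files load and contain such an entry.

```python
# Sanity-check the two generator checkpoints (assumes the stylegan2-pytorch
# checkpoint layout, where EMA generator weights live under "g_ema").
import torch

for name, path in [("domain1 (source)", "path/to/generator1"),
                   ("domain2 (target)", "path/to/generator2")]:
    ckpt = torch.load(path, map_location="cpu")
    keys = list(ckpt.keys()) if isinstance(ckpt, dict) else []
    print(f"{name}: top-level keys = {keys}")
    assert "g_ema" in keys, f"{name}: expected a 'g_ema' entry in the checkpoint"
```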
- Run the DDS code:
The first command runs on a random sample_z; the second reproduces the results of Figure 5 in the paper by loading its sample_z; the third can be run on a random or loaded sample_z and saves the iterations of the latent optimiser so that a GIF of the process can be assembled. A minimal sketch of the mask-guided latent optimisation these commands perform is given after the GIF commands below.
(Each of the following three commands requires around 5 GB of GPU memory and about 5 minutes to run.)
- run on a random sample_z:
python DDS_main.py --generator_domain1_dir path/to/generator1 --generator_domain2_dir path/to/generator2 --segmentation_dir path/to/segmentation_root --part_key eyes_nose_mouth --save_path_root path/to/save_root
- load a sample_z:
(Load the latent code [sample_z] of the examples in Figure 5 of the paper to reproduce the results.)
python DDS_main.py --generator_domain1_dir path/to/generator1 --generator_domain2_dir path/to/generator2 --segmentation_dir path/to/segmentation_root --part_key eyes_nose_mouth --save_path_root path/to/save_root --sample_z_path path/to/sampleZ
- save iterations:
python DDS_main.py --generator_domain1_dir path/to/generator1 --generator_domain2_dir path/to/generator2 --segmentation_dir path/to/segmentation_root --part_key eyes_nose_mouth --save_path_root path/to/save_root --save_iterations_path iterations --sample_z_path path/to/sampleZ
- make a gif of the iterations:
cd path/to/iterations
- domain1:
convert -delay 20 -loop 0 *_D1.png DDS_D1.gif
- domain2:
convert -delay 20 -loop 0 *_D2.png DDS_D2.gif
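The `convert` commands above use ImageMagick. If ImageMagick is not available, a similar GIF can be assembled in Python; the snippet below is a small alternative, not part of the repository, and the iterations path is a placeholder.

```python
# Assemble the saved iteration frames into a GIF (alternative to ImageMagick).
# duration=0.2 s per frame and loop=0 mirror `-delay 20 -loop 0`.
import glob
import imageio.v2 as imageio

frames = [imageio.imread(p) for p in sorted(glob.glob("path/to/iterations/*_D1.png"))]
imageio.mimsave("DDS_D1.gif", frames, duration=0.2, loop=0)
```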
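As a rough mental model of what `DDS_main.py` does, the sketch below shows segmentation-guided latent optimisation: a latent is optimised so the rendered image keeps the source-domain appearance outside the part mask and moves towards the target-domain reference inside it. This is an illustrative sketch, not the repository's actual code; `G1`, `G2`, `mask` and `perceptual` (e.g. an LPIPS-style distance) are assumed callables/tensors.

```python
# Minimal sketch of segmentation-guided dual-domain latent optimisation.
# Assumptions: G1, G2 map a latent to an image tensor; mask is [1, 1, H, W]
# with 1 inside the chosen part (e.g. eyes/nose/mouth); perceptual returns a scalar.
import torch

def dds_optimise(G1, G2, sample_z, mask, perceptual, steps=200, lr=0.01):
    with torch.no_grad():
        ref1 = G1(sample_z)              # domain-1 reference (e.g. natural face)
        ref2 = G2(sample_z)              # domain-2 reference (e.g. stylised face)
    latent = sample_z.clone().requires_grad_(True)
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        out = G1(latent)                 # candidate dual-domain image
        # keep domain-1 appearance outside the mask, pull towards domain 2 inside it
        loss = (perceptual(out * (1 - mask), ref1 * (1 - mask))
                + perceptual(out * mask, ref2 * mask))
        loss.backward()
        opt.step()
    return out.detach(), latent.detach()
```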
Source | Target | Dual-Domain | Latent optimisation |
---|---|---|---|
If you find our code useful, please cite our paper:
@InProceedings{Bazazian_2022_CVPR,
author = {Bazazian, Dena and Calway, Andrew and Damen, Dima},
title = {Dual-Domain Image Synthesis Using Segmentation-Guided GAN},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2022},
pages = {507-516}
}
As mentioned above, the StyleGAN2 model is borrowed from stylegan2-pytorch, the domain adaptation models from few-shot-gan-adaptation, the code for the perceptual model from StyleGAN_LatentEditor, and the source code for training the segmentation models from repurpose-gan.