[Project Page] [Paper] [Demo Video]
PyTorch implementation of multimodal image-to-image translation. For example, given the same night image, our model can synthesize possible day images with different types of lighting, sky, and clouds. Training requires paired data.
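Conceptually, the multimodality comes from conditioning the generator on a random latent code: the same input with different codes yields different plausible outputs. Below is a minimal sketch of this idea; the generator `G`, the latent size `nz`, and the function name are illustrative, not the repo's actual API.

```python
import torch

# Minimal sketch (illustrative names, not the repo's API): a trained
# generator G maps an input image A plus a random latent code z to one
# plausible output. Drawing different codes yields different outputs.
def sample_outputs(G, A, nz=8, n_samples=5):
    outputs = []
    for _ in range(n_samples):
        z = torch.randn(A.size(0), nz)  # new code -> new lighting/sky/etc.
        outputs.append(G(A, z))
    return outputs
```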
Toward Multimodal Image-to-Image Translation.
Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A. Efros, Oliver Wang, Eli Shechtman.
UC Berkeley and Adobe Research
In NIPS, 2017.
Other implementations:
- [Tensorflow] by Youngwoon Lee (USC CLVR Lab).
- [Tensorflow] by Kv Manohar.
Prerequisites:
- Linux or macOS
- Python 2 or 3
- CPU or NVIDIA GPU + CUDA cuDNN
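If you plan to train on a GPU, you can quickly confirm that PyTorch detects it (this check is optional; the code also runs on CPU):

```python
import torch

# Optional sanity check: prints True if a CUDA-capable GPU and matching
# drivers are visible to PyTorch.
print(torch.cuda.is_available())
```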
- Clone this repo:
git clone -b master --single-branch https://github.com/junyanz/BicycleGAN.git
cd BicycleGAN
- Install PyTorch and dependencies from http://pytorch.org
- Install the Python libraries visdom, dominate, and moviepy.
For pip users:
bash ./scripts/install_pip.sh
For conda users:
bash ./scripts/install_conda.sh
- Download some test photos (e.g., edges2shoes):
bash ./datasets/download_testset.sh edges2shoes
- Download a pre-trained model (e.g., edges2shoes):
bash ./pretrained_models/download_model.sh edges2shoes
- Generate results with the model:
bash ./scripts/test_edges2shoes.sh
The test results will be saved to an HTML file here: ./results/edges2shoes/val/index.html.
- Generate results with synchronized latent vectors (the same set of latent codes is reused for every input; see the sketch after this item):
bash ./scripts/test_edges2shoes.sh --sync
Results can be found at ./results/edges2shoes/val_sync/index.html.
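As a rough illustration of what "synchronized" means here: the latent codes are drawn once and shared across inputs, so the i-th output of every input uses the same code and results are comparable across inputs. Names below (`G`, `nz`) are illustrative, not the repo's API:

```python
import torch

# Illustrative sketch: draw the latent codes once, then reuse the same
# codes for every input image, so the i-th output of each input shares
# the same style code.
def sample_synced(G, inputs, nz=8, n_styles=5):
    zs = [torch.randn(1, nz) for _ in range(n_styles)]  # fixed set of codes
    return [[G(A, z) for z in zs] for A in inputs]
```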
- Generate morphing videos (see the sketch after this item):
bash ./scripts/video_edges2shoes.sh
Results can be found at ./videos/edges2shoes/.
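The morphing effect comes from moving through latent space: decode a sequence of codes blended between two endpoints. A minimal sketch with illustrative names (the script's exact logic may differ):

```python
import torch

# Illustrative sketch of latent-space morphing: linearly interpolate
# between two latent codes z0 and z1 and decode each intermediate code
# into one video frame.
def morph_frames(G, A, z0, z1, n_frames=30):
    frames = []
    for t in torch.linspace(0.0, 1.0, n_frames):
        z = (1 - t) * z0 + t * z1  # blend the two codes
        frames.append(G(A, z))
    return frames
```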
- To train a model, download the training images (e.g., edges2shoes).
bash ./datasets/download_dataset.sh edges2shoes
- Train a model (a simplified sketch of the objective follows this list):
bash ./scripts/train_edges2shoes.sh
- To view training results and loss plots, run
python -m visdom.server
and click the URL http://localhost:8097. To see more intermediate results, check out ./checkpoints/edges2shoes_bicycle_gan/web/index.html.
- See more training details for other datasets in ./scripts/train.sh.
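For orientation, the paper's hybrid objective combines a cVAE-GAN cycle (encode the real output B into a latent code, then reconstruct B) with a cLR-GAN cycle (sample a random code, then recover it from the generated output). The sketch below shows only the generator/encoder side of one step; the names, loss form, and weights are illustrative simplifications, and the discriminator update is omitted. See the paper and the repo's model code for the real implementation.

```python
import torch
import torch.nn.functional as F

# Heavily simplified sketch of one BicycleGAN generator/encoder step.
# E: encoder B -> (mu, logvar); G: generator (A, z) -> B_hat; D: discriminator.
def bicycle_gan_step(E, G, D, A, B, nz=8,
                     lambda_img=10.0, lambda_z=0.5, lambda_kl=0.01):
    # cVAE-GAN cycle: encode the real target B, then reconstruct it.
    mu, logvar = E(B)
    z_enc = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
    B_vae = G(A, z_enc)

    # cLR-GAN cycle: sample a random code, then try to recover it.
    z_rand = torch.randn(A.size(0), nz)
    B_lr = G(A, z_rand)
    mu_rec, _ = E(B_lr)

    real = torch.ones_like(D(B_vae))
    return (F.mse_loss(D(B_vae), real)              # fool D on the VAE branch
            + F.mse_loss(D(B_lr), real)             # fool D on the LR branch
            + lambda_img * F.l1_loss(B_vae, B)      # reconstruct the image
            + lambda_z * F.l1_loss(mu_rec, z_rand)  # recover the latent code
            - lambda_kl * 0.5 * torch.mean(
                1 + logvar - mu.pow(2) - logvar.exp()))  # KL toward N(0, I)
```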
Download the datasets using the following script. Many of the datasets are collected by other researchers. Please cite their papers if you use the data.
- Download the test set:
bash ./datasets/download_testset.sh dataset_name
- Download the training and test sets:
bash ./datasets/download_dataset.sh dataset_name
- facades: 400 images from the CMP Facades dataset. [Citation]
- maps: 1096 training images scraped from Google Maps
- edges2shoes: 50k training images from the UT Zappos50K dataset. Edges are computed by the HED edge detector + post-processing. [Citation]
- edges2handbags: 137K Amazon Handbag images from the iGAN project. Edges are computed by the HED edge detector + post-processing. [Citation]
Download the pre-trained models with the following script.
bash ./pretrained_models/download_model.sh model_name
- edges2shoes (edge -> photo): trained on the UT Zappos50K dataset.
- edges2handbags (edge -> photo): trained on Amazon handbag images.
bash ./pretrained_models/download_model.sh edges2handbags
bash ./datasets/download_testset.sh edges2handbags
bash ./scripts/test_edges2handbags.sh
- night2day (nighttime scene -> daytime scene): trained on around 100 webcams.
bash ./pretrained_models/download_model.sh night2day
bash ./datasets/download_testset.sh night2day
bash ./scripts/test_night2day.sh
- facades_label2image (facade label -> facade photo): trained on the CMP Facades dataset.
bash ./pretrained_models/download_model.sh facades_label2image
bash ./datasets/download_testset.sh facades
bash ./scripts/test_facades.sh
- map2aerial (map -> aerial photo): trained on 1096 training images scraped from Google Maps.
bash ./pretrained_models/download_model.sh map2aerial
bash ./datasets/download_testset.sh maps
bash ./scripts/test_maps.sh
If you find this code useful for your research, please cite the paper with the following BibTeX entry.
@incollection{zhu2017multimodal,
title = {Toward Multimodal Image-to-Image Translation},
author = {Zhu, Jun-Yan and Zhang, Richard and Pathak, Deepak and Darrell, Trevor and Efros, Alexei A and Wang, Oliver and Shechtman, Eli},
booktitle = {Advances in Neural Information Processing Systems 30},
year = {2017},
}
This code borrows heavily from the pytorch-CycleGAN-and-pix2pix repository.