Using SinGAN Style Transfer for creating computer vision input data
Flux.jl implementation of (part of) "SinGAN: Learning a Generative Model from a Single Natural Image"
PyTorch implementation
Using pix2pix and SinGAN to get into the movie
Multi-scale style transfer with a pyramid of fully convolutional GANs, inspired by "SinGAN: Learning a Generative Model from a Single Natural Image" (ICCV 2019); a minimal sketch of such a pyramid appears after this list.
Student project at the Technion for generating natural-looking images from a single image, using deep features of VGG19 and a hierarchical architecture based on SinGAN
Unofficial implementation of the paper "SinGAN: Learning a Generative Model from a Single Natural Image"
New Transformer network-based GAN for video generation.
Reimplementing the paper "SinGAN: Learning a Generative Model from a Single Natural Image"
Code for the final project of the MVA course "Object Recognition and Computer Vision". Application of SinGAN to style transfer
Implemented basic deep learning models using PyTorch
GUI for TOAD-GAN, a PCG-ML algorithm for Token-based Super Mario Bros. Levels.
"SinGAN : Learning a Generative Model from a Single Natural Image" in TensorFlow 2
Official repository for "TOAD-GAN: Coherent Style Level Generation from a Single Example" by Maren Awiszus, Frederik Schubert and Bodo Rosenhahn.
Official PyTorch implementation of the paper: "SinGAN: Learning a Generative Model from a Single Natural Image"
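The repositories above revolve around SinGAN's core idea: a coarse-to-fine pyramid of small fully convolutional generators, each refining an upsampled output of the previous scale. The sketch below illustrates only the sampling path of such a pyramid in PyTorch; it is not taken from any of the listed repositories, and names such as SingleScaleGenerator, generate_pyramid, the channel width, and the 4/3 upsampling factor are illustrative assumptions. Per-scale adversarial and reconstruction training is omitted.

```python
# Minimal sketch of a SinGAN-style generator pyramid (sampling path only).
# Untrained weights are used here, so the output is random; the point is the
# coarse-to-fine structure, not image quality.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBlock(nn.Sequential):
    """Conv -> BatchNorm -> LeakyReLU, the basic unit of each scale's generator."""
    def __init__(self, in_ch, out_ch):
        super().__init__(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.2, inplace=True),
        )


class SingleScaleGenerator(nn.Module):
    """Fully convolutional generator used at one scale of the pyramid."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            ConvBlock(3, channels),
            ConvBlock(channels, channels),
            ConvBlock(channels, channels),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, noise, prev):
        # Residual refinement: improve the upsampled output of the coarser scale.
        return self.body(noise + prev) + prev


def generate_pyramid(generators, coarsest_shape, device="cpu"):
    """Run the pyramid coarse-to-fine, upsampling the image between scales."""
    out = torch.zeros(1, 3, *coarsest_shape, device=device)
    for i, gen in enumerate(generators):
        h, w = out.shape[-2:]
        noise = torch.randn(1, 3, h, w, device=device)
        out = gen(noise, out)
        if i + 1 < len(generators):
            # Grow the spatial resolution by roughly 4/3 per scale.
            out = F.interpolate(out, scale_factor=4 / 3, mode="bilinear",
                                align_corners=False)
    return out


# Usage: three scales, starting from a 25x25 coarsest resolution.
gens = [SingleScaleGenerator() for _ in range(3)]
sample = generate_pyramid(gens, (25, 25))
print(sample.shape)
```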