19 skills found
carpedm20 / DiscoGAN Pytorch: PyTorch implementation of "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks"
SKTBrain / DiscoGAN: Official implementation of "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks"
jmiller656 / DiscoGAN Tensorflow: An implementation of DiscoGAN in TensorFlow
GunhoChoi / DiscoGAN TF: TensorFlow implementation of DiscoGAN
nashory / Gans Collection.torch: Torch implementation of various types of GAN (e.g. DCGAN, ALI, Context-encoder, DiscoGAN, CycleGAN, EBGAN, LSGAN)
ChunyuanLI / DiscoGAN: TensorFlow implementation of "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks"
suhoy901 / ImageTranslation: PyTorch implementations of pix2pix, CycleGAN, DiscoGAN, BicycleGAN, UNIT, MUNIT, pix2pixHD, and vid2vid
kartikgill / TF2 Keras GAN Notebooks: Generative Adversarial Networks with TensorFlow 2, Keras, and Python (Jupyter notebook implementations)
shaform / DeepNetworks: My implementations of deep neural networks for practice
dhgrs / Chainer DiscoGAN: A Chainer implementation of DiscoGAN
ilguyi / DiscoGAN.tensorflow.slim: No description available
Kuntal-G / Books: Code and bonus content, added over time, for the Packt books "Learning Generative Adversarial Networks (GAN)" and "R Data Analysis Cookbook, 2nd Edition"
taki0112 / DiscoGAN Tensorflow: Simple TensorFlow implementation of DiscoGAN
ChengBinJin / DiscoGAN TensorFlow: DiscoGAN TensorFlow implementation
clvrai / DiscoGAN Tensorflow: A TensorFlow implementation of DiscoGAN
samacoba / DiscoGAN Counter: No description available
leenasuva / Behind The Mask Image Analytics Using GANs: While Generative Adversarial Networks (GANs) have been a breakthrough in computer vision, there exist multiple styles of GANs tailored to specific problems. "Behind the mask", though it sounds trivial, points to a critical use case: unsupervised image-to-image translation, in which a model discovers the distinctive features of one image set and generates images belonging to the other by learning the differences between the two. This technique suits problems where paired images are not available; algorithms like Pix2pix are not viable because paired images are expensive and difficult to obtain. To tackle this problem, CycleGAN, DualGAN, and DiscoGAN show how a model can learn the mapping from one image domain to another from unpaired data. Even so, our problem of reconstructing human faces by removing their facial masks requires non-linear transformations, which makes it tricky. Moreover, these techniques also alter the background and change unwanted objects as their generators and discriminators produce fake images. The goal is an approach that not only detects the discriminating factors between two sets of pictures but also generates images that change only specific, targeted areas of the image, leaving the rest of the details unaltered. Another candidate technique is Contrast GAN, which selects a part of an image, transforms it based on the differentiating factors, and pastes it back into the original image. However, this approach requires the face masks to be identical and of fixed dimensions, which was not the case in our data.
To overcome these challenges, we employ an attention-based image-translation technique, AGGAN (Attention-Guided Generative Adversarial Networks), which does not require additional models or parameters to alter a specific part of the image. Like CycleGAN, AGGAN comprises two generators and two discriminators. The two attention-guided generators have built-in attention modules that disentangle the discriminative semantic object from the unwanted parts by producing an attention mask and a content mask; the underlying image is fused with these masks to create high-quality fake images. We also add further losses to reduce variance and keep related images pixel-consistent. We extend this to a more sophisticated network by applying two subnets to identify the attention and content masks. To avoid omitting any details, the network employs two attention masks, one for the foreground and one for the background, so that the foreground can be better learned and the background preserved. The generative content mask is also exposed to multiple types of facial masks so that it learns to recognize and remove a broad spectrum of them, creating a richer generation space. To obtain high-quality unmasked images, we aim to translate masked images to unmasked ones in a way that generalizes across faces with different skin colors and expressions.
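The mask fusion described above can be sketched in a few lines. This is a minimal NumPy illustration of the attention-guided fusion idea (an assumption based on the description, not code from any listed repository): the generator produces a content mask C and an attention mask A in [0, 1], and the fake image keeps the input pixels where attention is low while substituting generated content where attention is high.

```python
import numpy as np

def attention_fusion(input_img: np.ndarray,
                     content_mask: np.ndarray,
                     attention_mask: np.ndarray) -> np.ndarray:
    """Fuse generated content with the input image via an attention mask.

    fake = A * C + (1 - A) * input

    Where attention_mask is ~1 (e.g. the face-mask region), the generated
    content replaces the input; where it is ~0, the input (background,
    hair, clothing) is preserved unchanged.
    """
    return attention_mask * content_mask + (1.0 - attention_mask) * input_img

# Toy example: a 1-D "image" where only the middle pixel is attended to.
img = np.array([0.2, 0.5, 0.8])
content = np.array([0.9, 0.9, 0.9])   # generator's content output
attn = np.array([0.0, 1.0, 0.0])      # attention focused on the middle pixel
fake = attention_fusion(img, content, attn)
# fake -> [0.2, 0.9, 0.8]: background preserved, attended region replaced
```

The two-mask foreground/background variant mentioned above follows the same pattern with a second attention map for the background, so that both regions are blended explicitly rather than one being the complement of the other.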
zzdhxm12 / Generation Of Nail Art Designs Using DiscoGAN: Generation of nail art designs using DiscoGAN
dandelin / DiscoGAN Tensorflow: No description available