# ShinkaiGAN

ShinkaiGAN is an image-to-image translation model that transforms sketch images into anime scenes in the style of Makoto Shinkai. The model uses a Hybrid Perception Block U-Net architecture to achieve high-quality translation. To stabilize the training process, we adopt the progressive growing technique proposed by Karras et al. for ProGAN and StyleGAN.
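The progressive training referred to above can be sketched as follows. This is a minimal illustration of ProGAN-style fade-in, not ShinkaiGAN's actual training code: the function names and the linear schedule are assumptions.

```python
def fade_alpha(step, fade_steps):
    """Linear fade-in schedule: 0 at the start of a resolution level,
    ramping to 1 after fade_steps optimization steps."""
    return min(1.0, step / float(fade_steps))

def faded_output(low_res_upsampled, high_res, alpha):
    """Blend the upsampled output of the previous resolution level with
    the output of the newly added level (ProGAN-style fade-in), so the
    new layers are introduced gradually instead of all at once."""
    return (1.0 - alpha) * low_res_upsampled + alpha * high_res
```

While a new level fades in, the network's output is a weighted mix of the old low-resolution pathway and the new high-resolution one; once `alpha` reaches 1, only the new pathway remains.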
## Model Architecture

The core of ShinkaiGAN is a U-Net built from Hybrid Perception Blocks (Zheng et al., 2022), which pair a local convolutional branch with a global attention branch.
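As a rough illustration of what a Hybrid Perception Block computes (following the ITTR paper it comes from), here is a toy NumPy sketch: a depth-wise convolution for local context, single-head self-attention over spatial tokens for global context, and a residual sum. The exact block used in ShinkaiGAN may differ; everything below is an assumption for exposition.

```python
import numpy as np

def hybrid_perception_block(x, w_dw, w_q, w_k, w_v):
    """Toy Hybrid Perception Block: local branch (depth-wise 3x3 conv)
    plus global branch (self-attention over spatial tokens), combined
    with a residual connection.
    x: (C, H, W) feature map; w_dw: (C, 3, 3) depth-wise kernels;
    w_q, w_k, w_v: (C, C) attention projection matrices."""
    C, H, W = x.shape

    # Local branch: one 3x3 kernel per channel, zero padding.
    padded = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    local = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                local[c, i, j] = np.sum(padded[c, i:i+3, j:j+3] * w_dw[c])

    # Global branch: self-attention over the H*W spatial tokens.
    tokens = x.reshape(C, H * W).T                      # (HW, C)
    q, k, v = tokens @ w_q, tokens @ w_k, tokens @ w_v
    scores = q @ k.T / np.sqrt(C)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)            # softmax rows
    global_out = (attn @ v).T.reshape(C, H, W)

    return x + local + global_out                       # residual sum
```

A real implementation would vectorize the convolution, use multiple heads, and add normalization layers; the point here is only the two parallel perception branches.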
## Dataset

The model is trained on a custom dataset that includes:

- High-resolution anime scenes from various Makoto Shinkai films (not yet public due to copyright; this will be updated soon).
- Corresponding sketch images, either created manually or extracted with edge-detection algorithms.
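One classic way to extract a sketch from a frame, as mentioned above, is gradient-based edge detection. The sketch below uses a Sobel filter in plain NumPy; the actual extraction pipeline used for the dataset is not specified in this README, so the function and its threshold are illustrative assumptions.

```python
import numpy as np

def sketch_from_frame(gray, threshold=0.2):
    """Derive rough line art from a grayscale frame (values in [0, 1])
    via Sobel gradient magnitude: strong edges become dark strokes on a
    white background, roughly like a sketch."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = p[i:i+3, j:j+3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    mag = np.hypot(gx, gy)
    # Invert: edges -> black (0.0), flat regions -> white (1.0).
    return np.where(mag > threshold, 0.0, 1.0)
```

For production use, a vectorized filter (e.g. from SciPy or OpenCV) or a line-art-specific method such as XDoG would be more appropriate.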
## Usage

To use ShinkaiGAN, follow these steps:

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/ShinkaiGAN.git
   cd ShinkaiGAN
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Run training:

   ```bash
   python train.py \
       --src_dir "/path/to/source/directory" \
       --tgt_dir "/path/to/target/directory" \
       --lvl1_epoch 10 \
       --lvl2_epoch 20 \
       --lvl3_epoch 30 \
       --lvl4_epoch 40 \
       --lambda_adv 1.0 \
       --lambda_ct 0.1 \
       --lambda_up 0.01 \
       --lambda_style 0.01 \
       --lambda_color 0.001 \
       --lambda_grayscale 0.01 \
       --lambda_tv 0.001 \
       --lambda_fml 0.01 \
       --device cuda
   ```
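The `--lambda_*` flags suggest the generator objective is a weighted sum of several loss terms (e.g. `--lambda_adv` for an adversarial term, `--lambda_tv` for total variation). What each flag actually weights is our reading of the flag names, not something documented here, so treat this sketch as an assumption:

```python
def total_generator_loss(losses, weights):
    """Weighted sum of the generator's loss terms.
    losses:  dict mapping a term name to its scalar value this step.
    weights: dict mapping the same names to their lambda coefficients."""
    return sum(weights[name] * value for name, value in losses.items())

# Default weights mirroring the CLI flags in the training command above.
weights = {
    "adv": 1.0, "ct": 0.1, "up": 0.01, "style": 0.01,
    "color": 0.001, "grayscale": 0.01, "tv": 0.001, "fml": 0.01,
}
```

Keeping the weights in one dict makes it easy to log each term's weighted contribution and to sweep the coefficients during tuning.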
## Results

Here are some examples of sketch-to-anime transformations using ShinkaiGAN:
*(Sketch / anime-scene comparison images appear here in the original README.)*
## Contributing

We welcome contributions to improve ShinkaiGAN. If you would like to contribute, please follow these steps:

1. Fork the repository.
2. Create a new branch (`git checkout -b feature-branch`).
3. Commit your changes (`git commit -am 'Add new feature'`).
4. Push to the branch (`git push origin feature-branch`).
5. Create a new pull request.
## License

This project is licensed under the CC BY-NC-ND license. See the LICENSE file for details.
## References

- Zheng, W., Li, Q., Zhang, G., Wan, P., & Wang, Z. (2022). ITTR: Unpaired Image-to-Image Translation with Transformers. arXiv:2203.16015.
- Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv:1505.04597.
- Torbunov, D., Huang, Y., Tseng, H.-H., Yu, H., Huang, J., Yoo, S., Lin, M., Viren, B., & Ren, Y. (2023). UVCGAN v2: An Improved Cycle-Consistent GAN for Unpaired Image-to-Image Translation. arXiv:2303.16280.
- Karras, T., Aila, T., Laine, S., & Lehtinen, J. (2017). Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv:1710.10196.
- Karras, T., Laine, S., & Aila, T. (2018). A Style-Based Generator Architecture for Generative Adversarial Networks. arXiv:1812.04948.
- AnimeGANv3: A Novel Double-Tail Generative Adversarial Network for Fast Photo Animation. https://tachibanayoshino.github.io/AnimeGANv3/
- Liu, G., Chen, X., & Hu, Y. (2019). Anime Sketch Coloring with Swish-Gated Residual U-Net. Communications in Computer and Information Science, 190–204. https://doi.org/10.1007/978-981-13-6473-0_17