MultiTextureSynthesis
Torch implementation of our CVPR 2017 paper 'Diversified Texture Synthesis with Feed-forward Networks'.
Prerequisites
- Linux
- NVIDIA GPU + CUDA CuDNN
- Torch
- Pretrained VGG model (download and put it under data/pretrained/)
Task 1: Diverse synthesis
We first realize diverse synthesis with a single texture. Given one texture example, the generator should be powerful enough to combine its elements in various ways.
- Training
th single_texture_diverse_synthesis_train.lua -texture YourTextureExample.jpg -image_size 256 -diversity_weight -1.0
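The -diversity_weight flag controls how strongly outputs generated from different noise inputs are pushed apart. As a rough sketch of the general idea (a simplified assumption, not the repository's exact loss), a diversity term can be the negative mean pairwise distance between deep features of the samples in a batch:

```python
import numpy as np

def diversity_loss(features):
    """Negative mean pairwise L2 distance between per-sample features.

    features: array of shape (batch, d) -- deep features of each
    synthesized sample. With a negative weight on distance, minimizing
    this loss pushes samples apart, encouraging diverse outputs.
    """
    b = features.shape[0]
    total, pairs = 0.0, 0
    for i in range(b):
        for j in range(i + 1, b):
            total += np.linalg.norm(features[i] - features[j])
            pairs += 1
    return -total / pairs  # more diverse batch -> more negative loss
```

A batch whose samples collapse to the same output gives a loss of zero, the worst possible value under this term.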
- Testing
th single_texture_diverse_synthesis_test.lua
After obtaining all diverse results, run gif.m (in data/test_out/) in Matlab to convert them to an .avi video for viewing.
To plot the stored training loss (.json file), run
python plot_loss.py
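The repository ships its own plot_loss.py; a minimal stand-in (assuming the JSON file is a flat list of per-iteration loss values, which may not match the actual format) could look like:

```python
import json
import matplotlib
matplotlib.use("Agg")  # headless backend: write to file, no display needed
import matplotlib.pyplot as plt

def plot_loss(json_path, out_path="loss.png"):
    # Assumed format: a flat JSON list of per-iteration loss values.
    with open(json_path) as f:
        losses = json.load(f)
    plt.figure()
    plt.plot(losses)
    plt.xlabel("iteration")
    plt.ylabel("loss")
    plt.savefig(out_path)
    return len(losses)
```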
Task 2: Multi-texture synthesis
- Training
Collect your texture image set (e.g., data/texture60/) before training.
th multi_texture_synthesis_train.lua
- Testing
We release a 60-texture synthesis model that synthesizes the provided 60-texture set (ind_texture = 1, 2, ..., 60) in the data/texture60/ folder.
th multi_texture_synthesis_test.lua -ind_texture 24
Task 3: Multi-style transfer
In synthesis, each bit in the selection unit represents one texture example. In transfer, we instead employ a set of selection maps, where each map represents one style image and is initialized as a noise map (e.g., drawn from a uniform distribution).
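The selection mechanism described above can be sketched as follows (a simplified illustration with hypothetical shapes and names, not the repository's exact code): a one-hot selection unit for synthesis, and noise-initialized selection maps for style transfer.

```python
import numpy as np

def selection_unit(ind, num_textures):
    """One-hot selection vector: bit `ind` marks the chosen texture."""
    sel = np.zeros(num_textures)
    sel[ind] = 1.0
    return sel

def selection_maps(ind, num_styles, height, width, rng=None):
    """For style transfer: the chosen style's map is initialized with
    uniform noise; all other style maps stay zero."""
    rng = rng or np.random.default_rng()
    maps = np.zeros((num_styles, height, width))
    maps[ind] = rng.uniform(size=(height, width))
    return maps
```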
Collect your style image set (e.g., data/style1000/) before training. For a large number of style images (e.g., 1,000), we suggest converting all images (e.g., .jpg) to an HDF5 file for fast reading.
th convertHDF5.lua -images_path YourImageSetPath -save_to XXX.hdf5 -resize_to 512
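convertHDF5.lua performs this conversion in Torch; an equivalent sketch in Python (using h5py and Pillow, with a hypothetical dataset name "images") would be:

```python
import numpy as np
import h5py
from PIL import Image

def images_to_hdf5(image_paths, save_to, resize_to=512):
    """Resize each image to (resize_to, resize_to) and stack them into
    one HDF5 dataset for fast sequential reads during training."""
    with h5py.File(save_to, "w") as f:
        dset = f.create_dataset(
            "images", (len(image_paths), resize_to, resize_to, 3),
            dtype="uint8")
        for i, path in enumerate(image_paths):
            img = Image.open(path).convert("RGB")
            dset[i] = np.asarray(img.resize((resize_to, resize_to)))
```

Reading from one contiguous HDF5 file avoids per-image filesystem overhead when the training loop touches a random style each iteration.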
- Training
th multi_style_transfer_train.lua -image_size 512
- Testing
We release a 1000-style transfer model that transfers the provided 1000-style set (ind_texture = 1, 2, ..., 1000).
th multi_style_transfer_test.lua
Citation
@inproceedings{DTS-CVPR-2017,
  author = {Li, Yijun and Fang, Chen and Yang, Jimei and Wang, Zhaowen and Lu, Xin and Yang, Ming-Hsuan},
  title = {Diversified Texture Synthesis with Feed-forward Networks},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
  year = {2017}
}
Acknowledgement
- Code is heavily borrowed from popular implementations of several great works, including NeuralArt, TextureNet, and FastNeuralArt.
