# Deep3D
Real-time, end-to-end 2D-to-3D video conversion based on deep learning.
Inspired by piiswrong/deep3d, we rebuilt the network in PyTorch and optimized it in the time domain for faster inference. So, try it and enjoy your own 3D movies.
<div align="center"> <img src="./medias/wood_result_360p.gif"><br> </div>Left: the input video; right: the output video with parallax.<br>
More examples:
## Inference speed
| Plan | 360p (FPS) | 720p (FPS) | 1080p (FPS) | 4k (FPS) |
| :----------------------: | :--------: | :--------: | :---------: | :------: |
| GPU (2080ti) | 84 | 87 | 77 | 26 |
| CPU (Xeon Platinum 8260) | 27.7 | 14.1 | 7.2 | 2.0 |
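Assuming a typical 24 fps source (an assumption; the table does not state a frame rate), these numbers translate into real-time factors, where anything above 1.0x keeps up with playback:

```python
# Real-time factor = inference FPS / source frame rate (24 fps assumed).
SOURCE_FPS = 24
gpu_fps = {"360p": 84, "720p": 87, "1080p": 77, "4k": 26}
for res, fps in gpu_fps.items():
    print(f"{res}: {fps / SOURCE_FPS:.2f}x real-time")  # e.g. 360p: 3.50x
```

On this GPU every resolution, including 4k, stays above real time; the CPU falls below it beyond 360p.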
## Run Deep3D
### Prerequisites
- Linux, macOS, Windows
- Python 3.7+
- ffmpeg 3.4.6+
- PyTorch 1.7.1
- CPU or NVIDIA GPU
### Dependencies
This code depends on opencv-python, available via pip:
```bash
pip install opencv-python
```
### Clone this repo
```bash
git clone https://github.com/HypoX64/Deep3D
cd Deep3D
```
### Get Pre-Trained Models
You can download pre-trained models from [Google Drive] or [Baidu Cloud, extraction code: xxo0].<br> Note:
- The 360p model currently gives the best results.
- The published models are not optimized for inference.
- Models are still being trained; 1080p and 4k models will be uploaded in the future.
### Run it!
```bash
python inference.py --model ./export/deep3d_v1.0_640x360_cuda.pt --video ./medias/wood.mp4 --out ./result/wood.mp4 --inv
# some videos need the left and right views swapped (--inv)
```
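To convert several clips in one go, the command above can be wrapped in a small script. The sketch below only assembles the same `inference.py` command line for each file; the folder names and the `--inv` choice are illustrative assumptions, and the resulting argv list can be handed to `subprocess.run` to execute.

```python
from pathlib import Path

def build_cmd(video, out,
              model="./export/deep3d_v1.0_640x360_cuda.pt", inv=False):
    """Assemble the inference.py command line shown above as an argv list
    (pass it to subprocess.run(..., check=True) to actually execute it)."""
    cmd = ["python", "inference.py",
           "--model", model, "--video", str(video), "--out", str(out)]
    if inv:  # some videos need the left and right views swapped
        cmd.append("--inv")
    return cmd

# Build one command per source clip (hypothetical file names).
for name in ["wood.mp4", "city.mp4"]:
    print(" ".join(build_cmd(Path("medias") / name,
                             Path("result") / name, inv=True)))
```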
## Acknowledgements
This code borrows heavily from [deep3d] and [DeepMosaics].
