Tha3
A Python package for `tha3` (Talking Head(?) Anime from A Single Image 3: Now the Body Too), the project from which `EasyVtuber` was derived.
Install / Use
Talking Head Anime from a Single Image 3
<p align="center"> <a href="https://github.com/34j/tha3/actions/workflows/ci.yml?query=branch%3Amain"> <img src="https://img.shields.io/github/actions/workflow/status/34j/tha3/ci.yml?branch=main&label=CI&logo=github&style=flat-square" alt="CI Status" > </a> <a href="https://tha3.readthedocs.io"> <img src="https://img.shields.io/readthedocs/tha3.svg?logo=read-the-docs&logoColor=fff&style=flat-square" alt="Documentation Status"> </a> <a href="https://codecov.io/gh/34j/tha3"> <img src="https://img.shields.io/codecov/c/github/34j/tha3.svg?logo=codecov&logoColor=fff&style=flat-square" alt="Test coverage percentage"> </a> </p> <p align="center"> <a href="https://python-poetry.org/"> <img src="https://img.shields.io/badge/packaging-poetry-299bd7?style=flat-square&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA4AAAASCAYAAABrXO8xAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAJJSURBVHgBfZLPa1NBEMe/s7tNXoxW1KJQKaUHkXhQvHgW6UHQQ09CBS/6V3hKc/AP8CqCrUcpmop3Cx48eDB4yEECjVQrlZb80CRN8t6OM/teagVxYZi38+Yz853dJbzoMV3MM8cJUcLMSUKIE8AzQ2PieZzFxEJOHMOgMQQ+dUgSAckNXhapU/NMhDSWLs1B24A8sO1xrN4NECkcAC9ASkiIJc6k5TRiUDPhnyMMdhKc+Zx19l6SgyeW76BEONY9exVQMzKExGKwwPsCzza7KGSSWRWEQhyEaDXp6ZHEr416ygbiKYOd7TEWvvcQIeusHYMJGhTwF9y7sGnSwaWyFAiyoxzqW0PM/RjghPxF2pWReAowTEXnDh0xgcLs8l2YQmOrj3N7ByiqEoH0cARs4u78WgAVkoEDIDoOi3AkcLOHU60RIg5wC4ZuTC7FaHKQm8Hq1fQuSOBvX/sodmNJSB5geaF5CPIkUeecdMxieoRO5jz9bheL6/tXjrwCyX/UYBUcjCaWHljx1xiX6z9xEjkYAzbGVnB8pvLmyXm9ep+W8CmsSHQQY77Zx1zboxAV0w7ybMhQmfqdmmw3nEp1I0Z+FGO6M8LZdoyZnuzzBdjISicKRnpxzI9fPb+0oYXsNdyi+d3h9bm9MWYHFtPeIZfLwzmFDKy1ai3p+PDls1Llz4yyFpferxjnyjJDSEy9CaCx5m2cJPerq6Xm34eTrZt3PqxYO1XOwDYZrFlH1fWnpU38Y9HRze3lj0vOujZcXKuuXm3jP+s3KbZVra7y2EAAAAAASUVORK5CYII=" alt="Poetry"> </a> <a href="https://github.com/ambv/black"> <img src="https://img.shields.io/badge/code%20style-black-000000.svg?style=flat-square" alt="black"> </a> <a href="https://github.com/pre-commit/pre-commit"> <img 
src="https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white&style=flat-square" alt="pre-commit"> </a> </p> <p align="center"> <a href="https://pypi.org/project/tha3/"> <img src="https://img.shields.io/pypi/v/tha3.svg?logo=python&logoColor=fff&style=flat-square" alt="PyPI Version"> </a> <img src="https://img.shields.io/pypi/pyversions/tha3.svg?style=flat-square&logo=python&logoColor=fff" alt="Supported Python versions"> <img src="https://img.shields.io/pypi/l/tha3.svg?style=flat-square" alt="License"> </p>A Python package to more easily install pkhungurn/talking-head-anime-3-demo and use Python APIs.
The package ships with the officially distributed `separable_float` model (only) by default.
Installation
Install this via pipx or pip (or your favourite package manager):
pipx install tha3
pipx inject tha3 torch --index-url https://download.pytorch.org/whl/cu118 --pip-args="--upgrade"
Usage
tha3 # manual poser
tha3i # iFacialMocap Puppeteer
Contributors ✨
Thanks goes to these wonderful people (emoji key):
<!-- prettier-ignore-start --> <!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section --> <!-- prettier-ignore-start --> <!-- markdownlint-disable --> <table> <tbody> <tr> <td align="center" valign="top" width="14.28%"><a href="https://github.com/34j"><img src="https://avatars.githubusercontent.com/u/55338215?v=4?s=80" width="80px;" alt="34j"/><br /><sub><b>34j</b></sub></a><br /><a href="https://github.com/34j/tha3/commits?author=34j" title="Code">💻</a> <a href="#ideas-34j" title="Ideas, Planning, & Feedback">🤔</a> <a href="https://github.com/34j/tha3/commits?author=34j" title="Documentation">📖</a></td> </tr> </tbody> </table> <!-- markdownlint-restore --> <!-- prettier-ignore-end --> <!-- ALL-CONTRIBUTORS-LIST:END --> <!-- prettier-ignore-end -->This project follows the all-contributors specification. Contributions of any kind welcome!
Demo Code for "Talking Head(?) Anime from A Single Image 3: Now the Body Too"
This repository contains demo programs for the Talking Head(?) Anime from a Single Image 3: Now the Body Too project. As the name implies, the project allows you to animate anime characters, and you only need a single image of that character to do so. There are two demo programs:
- The `manual_poser` lets you manipulate a character's facial expression, head rotation, body rotation, and chest expansion due to breathing through a graphical user interface.
- The `ifacialmocap_puppeteer` lets you transfer your facial motion to an anime character.
Try the Manual Poser on Google Colab
If you do not have the required hardware (discussed below), or do not want to download the code and set up an environment to run it, you can try running the manual poser on Google Colab.
Hardware Requirements
Both programs require a recent and powerful Nvidia GPU to run. I was personally able to run them at good speed on an Nvidia Titan RTX. However, recent high-end gaming GPUs such as the RTX 2080, the RTX 3080, or better should do just as well.
The ifacialmocap_puppeteer requires an iOS device that is capable of computing blend shape parameters from a video feed. This means that the device must be able to run iOS 11.0 or higher and must have a TrueDepth front-facing camera. (See this page for more info.) In other words, if you have the iPhone X or something better, you should be all set. Personally, I have used an iPhone 12 mini.
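The puppeteer works by receiving the blend shape parameters that the iOS device computes and mapping them onto the character's pose. As an illustration only (the exact iFacialMocap wire format is an assumption here, not taken from this project; consult the app's documentation for the real packet layout), decoding such a parameter stream might look like this:

```python
def parse_blendshapes(packet: str) -> dict[str, float]:
    """Parse 'name-value' fields separated by '|' into normalized floats.

    NOTE: this field layout is hypothetical, for illustration; the real
    iFacialMocap packet format may differ.
    """
    params = {}
    for field in packet.split("|"):
        name, sep, value = field.rpartition("-")
        if not (sep and name):
            continue  # skip malformed fields
        try:
            # Assume values arrive as percentages in the 0-100 range.
            params[name] = float(value) / 100.0
        except ValueError:
            continue
    return params
```

For example, `parse_blendshapes("jawOpen-42|eyeBlinkLeft-7")` yields a dictionary mapping `jawOpen` to `0.42` and `eyeBlinkLeft` to `0.07`.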
Software Requirements
GPU Related Software
Please update your GPU's device driver and install the CUDA Toolkit that is compatible with your GPU and is newer than the version you will be installing in the next subsection.
Python Environment
Both manual_poser and ifacialmocap_puppeteer are available as desktop applications. To run them, you need to set up an environment for running programs written in the Python language. The environment needs to have the following software packages:
- Python >= 3.8
- PyTorch >= 1.11.0 with CUDA support
- SciPy >= 1.7.3
- wxPython >= 4.1.1
- Matplotlib >= 3.5.1
One way to do so is to install Anaconda and run the following commands in your shell:
> conda create -n talking-head-anime-3-demo python=3.8
> conda activate talking-head-anime-3-demo
> conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
> conda install scipy
> pip install wxpython
> conda install matplotlib
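After running these commands, you can sanity-check the environment with a short script like the one below (a minimal sketch; it only reports what is importable, it does not compare against the minimum versions listed above):

```python
import sys
from importlib import import_module

# Import names for the required packages ("wx" is wxPython's import name).
PACKAGES = ["torch", "scipy", "wx", "matplotlib"]


def check_environment() -> dict:
    """Report whether Python is >= 3.8 and which packages are importable."""
    report = {"python_ok": sys.version_info >= (3, 8)}
    for name in PACKAGES:
        try:
            module = import_module(name)
            report[name] = getattr(module, "__version__", "unknown")
        except ImportError:
            report[name] = None  # not installed
    return report


if __name__ == "__main__":
    for name, value in check_environment().items():
        print(f"{name}: {value}")
```

Any entry reported as `None` needs to be installed before the demos will start.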
Caveat 1: Do not use Python 3.10 on Windows
As of June 2022, you cannot use wxPython with Python 3.10 on Windows. Until this bug is fixed, do not use Python 3.10; in particular, do not set python=3.10 in the first conda command in the listing above.
Caveat 2: Adjust versions of Python and CUDA Toolkit as needed
The environment created by the commands above gives you Python 3.8 and a PyTorch build compiled against CUDA Toolkit 11.3. This particular combination might not work in the future, for example if that PyTorch build is incompatible with a newer GPU or driver. The solution is to:
- Change the Python version in the first command to a recent one that works for your OS. (That is, do not use 3.10 if you are using Windows.)
- Change the version of CUDA toolkit in the third command to one that the PyTorch's website says is available. In particular, scroll to the "Install PyTorch" section and use the chooser there to pick the right command for your computer. Use that command to install PyTorch instead of the third command above.
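Once PyTorch is installed, you can confirm that the chosen build actually sees your GPU with a snippet like the following (a minimal sketch; if `torch` is missing or was built without CUDA support, it reports that instead of failing):

```python
def cuda_status() -> str:
    """Return a one-line description of PyTorch's CUDA availability."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed"
    if not torch.cuda.is_available():
        return f"PyTorch {torch.__version__} installed, but CUDA is not available"
    return (
        f"PyTorch {torch.__version__} with CUDA {torch.version.cuda}, "
        f"device: {torch.cuda.get_device_name(0)}"
    )


if __name__ == "__main__":
    print(cuda_status())
```

If this reports that CUDA is not available, revisit the driver, CUDA Toolkit, and PyTorch versions chosen above.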

Jupyter Environment
The manual_poser is also available as a Jupyter Notebook. To run it on your local machine, you also need to install:
- Jupyter Notebook >= 7.3.4
- IPywidgets >= 7.7.0
In some cases, you will also need to enable the widgetsnbextension. After installing the two packages above, run:
> jupyter nbextension enable --py widgetsnbextension
Using Anaconda, I managed to do all of the above with the following commands:
> conda install -c conda-forge notebook
> conda install -c conda-forge ipywidgets
> jupyter nbextension enable --py widgetsnbextension
Automatic Environment Construction with Anaconda
You can also use Anaconda to download and install all Python packages in one command. Open your shell, change the directory to where you clone the repository, and run:
> conda env create -f environment.yml
This will create an environment called talking-head-anime-3-demo containing all the required Python packages.
iFacialMocap
If you want to use the ifacialmocap_puppeteer, you will also need an iOS app called iFacialMocap (a 980 yen purchase on the App Store). You do not need to download its paired desktop application. Your iOS device and your computer must be on the same network; for example, you may connect them to the same wireless router.
