Wav2Lip
This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For an HD commercial model, please try out Sync Labs.
Wav2Lip: Accurately Lip-syncing Videos In The Wild
Commercial Version
Create your first lipsync generation in minutes. Please note that the commercial version is of much higher quality than the old open-source model!
Create your API Key
Create your API key from the Dashboard. You will use this key to securely access the Sync API.
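Rather than hardcoding the key in source files, you can read it from an environment variable; a minimal sketch (the variable name `SYNC_API_KEY` is an assumption for illustration, not an official convention):

```python
import os

def load_api_key(var: str = "SYNC_API_KEY") -> str:
    # Read the API key from the environment so it never lands in version control.
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set the {var} environment variable to your Sync API key")
    return key
```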
Make your first generation
The following example shows how to make a lipsync generation using the Sync API.
Python
Step 1: Install Sync SDK
```bash
pip install syncsdk
```
Step 2: Make your first generation
Copy the following code into a file `quickstart.py` and replace `YOUR_API_KEY_HERE` with your generated API key.
```python
# quickstart.py
import sys
import time

from sync import Sync
from sync.common import Audio, GenerationOptions, Video
from sync.core.api_error import ApiError

# ---------- UPDATE API KEY ----------
# Replace with your Sync.so API key
api_key = "YOUR_API_KEY_HERE"

# ---------- [OPTIONAL] UPDATE INPUT VIDEO AND AUDIO URL ----------
# URL to your source video
video_url = "https://assets.sync.so/docs/example-video.mp4"
# URL to your audio file
audio_url = "https://assets.sync.so/docs/example-audio.wav"
# -----------------------------------------------------------------

client = Sync(
    base_url="https://api.sync.so",
    api_key=api_key
).generations

print("Starting lip sync generation job...")
try:
    response = client.create(
        input=[Video(url=video_url), Audio(url=audio_url)],
        model="lipsync-2",
        options=GenerationOptions(sync_mode="cut_off"),
        outputFileName="quickstart"
    )
except ApiError as e:
    print(f"create generation request failed with status code {e.status_code} and error {e.body}")
    sys.exit(1)

job_id = response.id
print(f"Generation submitted successfully, job id: {job_id}")

generation = client.get(job_id)
status = generation.status
while status not in ["COMPLETED", "FAILED"]:
    print("polling status for generation", job_id)
    time.sleep(10)
    generation = client.get(job_id)
    status = generation.status

if status == "COMPLETED":
    print("generation", job_id, "completed successfully, output url:", generation.output_url)
else:
    print("generation", job_id, "failed")
```
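The script polls at a fixed 10-second interval with no upper bound. A sketch of a bounded polling helper with exponential backoff (a generic utility, not part of the Sync SDK; `get_status` would wrap `client.get(job_id).status`):

```python
import time

def poll_until_done(get_status, max_wait: float = 600.0, base: float = 2.0, cap: float = 30.0) -> str:
    """Call get_status() with exponentially increasing delays until it returns
    'COMPLETED' or 'FAILED', or until roughly max_wait seconds of sleeping."""
    waited, delay = 0.0, base
    while waited < max_wait:
        status = get_status()
        if status in ("COMPLETED", "FAILED"):
            return status
        time.sleep(delay)
        waited += delay
        delay = min(delay * 2, cap)  # double the delay, but never exceed the cap
    return "TIMEOUT"
```

In the script above, `poll_until_done(lambda: client.get(job_id).status)` could replace the `while` loop.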
Run the script:
```bash
python quickstart.py
```
Step 3: Done!
It may take a few minutes for the generation to complete. You should see the generated video URL in the terminal once it finishes.
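Once the job completes, you may want to save the output locally. A minimal sketch using Python's standard-library downloader (the filename derivation is an illustration, not SDK behavior):

```python
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve

def output_filename(output_url: str, fallback: str = "result.mp4") -> str:
    # Use the last path segment of the URL as the local filename.
    name = os.path.basename(urlparse(output_url).path)
    return name or fallback

# After a successful generation you could download it, e.g.:
# urlretrieve(generation.output_url, output_filename(generation.output_url))
```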
TypeScript
Step 1: Install dependencies
```bash
npm i @sync.so/sdk
```
Step 2: Make your first generation
Copy the following code into a file `quickstart.ts` and replace `YOUR_API_KEY_HERE` with your generated API key.
```typescript
// quickstart.ts
import { SyncClient, SyncError } from "@sync.so/sdk";

// ---------- UPDATE API KEY ----------
// Replace with your Sync.so API key
const apiKey = "YOUR_API_KEY_HERE";

// ---------- [OPTIONAL] UPDATE INPUT VIDEO AND AUDIO URL ----------
// URL to your source video
const videoUrl = "https://assets.sync.so/docs/example-video.mp4";
// URL to your audio file
const audioUrl = "https://assets.sync.so/docs/example-audio.wav";
// -----------------------------------------------------------------

const client = new SyncClient({ apiKey });

async function main() {
    console.log("Starting lip sync generation job...");

    let jobId: string;
    try {
        const response = await client.generations.create({
            input: [
                { type: "video", url: videoUrl },
                { type: "audio", url: audioUrl },
            ],
            model: "lipsync-2",
            options: { sync_mode: "cut_off" },
            outputFileName: "quickstart",
        });
        jobId = response.id;
        console.log(`Generation submitted successfully, job id: ${jobId}`);
    } catch (err) {
        if (err instanceof SyncError) {
            console.error(`create generation request failed with status code ${err.statusCode} and error ${JSON.stringify(err.body)}`);
        } else {
            console.error("An unexpected error occurred:", err);
        }
        return;
    }

    let generation;
    let status;
    while (status !== "COMPLETED" && status !== "FAILED") {
        console.log(`polling status for generation ${jobId}...`);
        try {
            await new Promise((resolve) => setTimeout(resolve, 10000));
            generation = await client.generations.get(jobId);
            status = generation.status;
        } catch (err) {
            if (err instanceof SyncError) {
                console.error(`polling failed with status code ${err.statusCode} and error ${JSON.stringify(err.body)}`);
            } else {
                console.error("An unexpected error occurred during polling:", err);
            }
            status = "FAILED";
        }
    }

    if (status === "COMPLETED") {
        console.log(`generation ${jobId} completed successfully, output url: ${generation?.outputUrl}`);
    } else {
        console.log(`generation ${jobId} failed`);
    }
}

main();
```
Run the script:
```bash
npx -y tsx quickstart.ts
```
Step 3: Done!
You should see the generated video URL in the terminal.
Next Steps
Well done! You've just made your first lipsync generation with sync.so!
Ready to unlock the full potential of lipsync? Dive into our interactive Studio to experiment with all available models, or explore our API Documentation to take your lip-sync generations to the next level!
Contact
- prady@sync.so
- pavan@sync.so
- sanjit@sync.so
Non-Commercial Open-Source Version
This code is part of the paper: A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild published at ACM Multimedia 2020.
|📑 Original Paper|📰 Project Page|🌀 Demo|⚡ Live Testing|📔 Colab Notebook|
|:-:|:-:|:-:|:-:|:-:|
|Paper|Project Page|Demo Video|Interactive Demo|Colab Notebook / Updated Colab Notebook|
Highlights
- Weights of the visual quality disc have been updated in the README!
- Lip-sync videos to any target speech with high accuracy :100:. Try our interactive demo.
- :sparkles: Works for any identity, voice, and language. Also works for CGI faces and synthetic voices.
- Complete training code, inference code, and pretrained models are available :boom:
- Or, quick-start with the Google Colab Notebook: Link. Checkpoints and samples are available in a Google Drive folder as well. There is also a tutorial video on this, courtesy of What Make Art. Also, thanks to Eyal Gruss, there is a more accessible Google Colab notebook with more useful features. A tutorial Colab notebook is present at this link.
- :fire: :fire: Several new, reliable evaluation benchmarks and metrics (in the `evaluation/` folder of this repo) released. Instructions to calculate the metrics reported in the paper are also present.
Disclaimer
All results from this open-source code or our demo website should be used for research/academic/personal purposes only. As the models are trained on the <a href="http://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs2.html">LRS2 dataset</a>, any form of commercial use is strictly prohibited. For commercial requests, please contact us directly!
Prerequisites
- Python 3.6
- ffmpeg: `sudo apt-get install ffmpeg`
- Install necessary packages using `pip install -r requirements.txt`. Alternatively, instructions for using a docker image are provided here. Have a look at this comment and comment on the gist if you encounter any issues.
- Face detection pre-trained model should be downloaded to `face_detection/detection/sfd/s3fd.pth`. Alternative [link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/prajwal_k_research_iiit_ac_in/EZsy6qWuivtDnANIG73iHjIBjMSoo
