WebAR.rocks.face
Lightweight WebGL and JavaScript library for real-time, on-device face detection, tracking and landmark estimation. Ideal for web-based augmented reality face filters and virtual try-on.
JavaScript/WebGL lightweight and robust face tracking library based on landmark detection and tracking
This JavaScript library detects and tracks the face in real time from the camera video feed captured with the MediaStream API. It is then possible to overlay 3D content for augmented reality applications. The library is lightweight and does not include any 3D engine or third-party library. We want to keep it framework agnostic, so the outputs of the library are raw: whether a face is detected or not, the position and scale of the detected face, and its rotation Euler angles.
Facial landmark positions are also among the neural network outputs. There is a trade-off between the number of detected keypoints and the accuracy and weight of the neural network: the fewer the keypoints, the better the detection accuracy, because the neural network can be more focused.
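To illustrate how a raw landmark output could be consumed, here is a minimal sketch. The coordinate convention below (landmarks in a normalized [-1, 1] WebGL viewport space, Y axis pointing up) is an assumption for illustration only; check it against the library's actual output format.

```javascript
// Illustrative only: convert one landmark position from an assumed
// normalized [-1, 1] viewport space to canvas pixel coordinates.
function landmarkToPixels(lmX, lmY, canvasWidth, canvasHeight) {
  const px = (lmX + 1) * 0.5 * canvasWidth;
  const py = (1 - lmY) * 0.5 * canvasHeight; // flip Y: WebGL Y axis points up
  return [px, py];
}

// e.g. the center of the viewport maps to the center of a 600x600 canvas:
// landmarkToPixels(0, 0, 600, 600) -> [300, 300]
```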
Features
Here are the main features of the library:
- face detection,
- face tracking,
- face rotation detection,
- facial landmark detection,
- multiple face detection and tracking,
- robust to varying lighting conditions,
- video acquisition with HD video ability,
- mobile friendly.
Architecture
- /demos/: source code of the demonstrations, sorted by 2D/3D engine used,
- /dist/: core of the library:
  - WebARRocksFace.js: main minified script,
  - WebARRocksFace.module.js: main minified script for module use (with import or require),
- /helpers/: scripts which can help you to use this library in some specific use cases,
- /neuralNets/: neural network models,
- /libs/: 3rd-party libraries and 3D engines used in the demos,
- /reactThreeFiberDemos/: demos with Vite/NPM/React/Three Fiber,
- /blenderPluginFlexibleMaskExporter/: Blender plugin to export the metadata JSON file used in the flexibleMask2 demo,
- /VTO4Sketchfab/: integration with the Sketchfab 3D viewer.
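Based on the two builds listed above, the library can be loaded either with a classic script tag or as a module. The snippet below is a sketch; the relative paths depend on your project layout.

```javascript
// With a bundler or native ES modules (path relative to your project layout):
import WEBARROCKSFACE from './dist/WebARRocksFace.module.js';

// Or with CommonJS (e.g. a Node-based bundler):
// const WEBARROCKSFACE = require('./dist/WebARRocksFace.module.js');

// With a plain <script src="dist/WebARRocksFace.js"></script> tag,
// WEBARROCKSFACE is exposed as a global instead.
```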
Demonstrations
Included in this repository
The best demos have been ported to a modern front-end development environment (NPM / Vite / React / Three Fiber / ES6) in the /reactThreeFiberDemos directory. This is a standalone directory.
Here are the static JavaScript demos:
- Debug and test views:
  - basic debug view (displays the face landmarks): live demo, source code
  - advanced debug view: live demo, source code
  - expressions detection debug view: live demo, source code
- Accessories virtual try-on:
  - earrings VTO 2D: live demo, source code
  - earrings VTO 3D: live demo, source code
  - glasses VTO: live demo, source code and specific documentation,
  - headphones/helmet VTO: live demo, source code
  - hat VTO: live demo, source code
  - necklace VTO: live demo, source code
- Flexible masks:
  - 3D flexible mask 2: live demo, source code
  - 3D flexible mask using a skeleton (autobones): live demo, source code
- Makeup:
  - makeup lipstick VTO: live demo, source code
  - makeup shapes based VTO: live demo, source code
  - makeup texture based VTO: live demo, source code
  - sport makeup: live demo, source code
- Misc:
  - background removal: live demo, source code
  - GIF face replacement: live demo, source code
They trust us
Jam.gg: Jam.gg (formerly Piepacker), a social online gaming platform with more than 5 million users worldwide, relies on this library to add 3D masks and face filters in augmented reality. To test it, sign up or log in, select a game, create or join a gaming room and select a mask.
Kinder: Applaydu, an educational mobile application published by Kinder, relies on the WebAR.rocks.face detection and tracking library for augmented reality face masks. The application is developed by Gameloft in collaboration with the University of Oxford's Department of Education. It is released for both iOS and Android. On Android alone, it has been downloaded more than 10 million times. More information and the download link are available on the Kinder official website.
Franky's hat: Franky's hat relies on this library for hat virtual try-on. You can check it out on the Franky's hat website, then click the TRY IN AR button.
Specifications
Get started
The best way to get started is to take a look at our boilerplate demo. It uses some handy helpers from the /helpers path. Here we describe the initialization of the core library without the helpers, but we strongly advise using them.
On your HTML page, you first need to include the main script between the tags <head> and </head>:
<script src="dist/WebARRocksFace.js"></script>
Then you should include a <canvas> HTML element in the DOM, between the tags <body> and </body>. The width and height properties of the <canvas> element should be set: they define the resolution of the canvas, and the final rendering will be computed at this resolution. Be careful not to enlarge the canvas too much through its CSS properties without increasing its resolution, otherwise the rendering may look blurry or pixelated. We advise matching the resolution to the actual canvas size. Do not forget to call WEBARROCKSFACE.resize() if you resize the canvas after the initialization step. We strongly encourage you to use our helper /helpers/WebARRocksResizer.js to set the width and height of the canvas (see the Optimization/Canvas and video resolutions section).
<canvas width="600" height="600" id='WebARRocksFaceCanvas'></canvas>
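The resolution caveat above can be sketched as a small helper. This is not part of the library: matchCanvasResolution is a hypothetical function shown for illustration; only WEBARROCKSFACE.resize() and the canvas id come from this README.

```javascript
// Hypothetical helper (not part of the library): make the canvas
// drawing-buffer resolution match its displayed CSS size, so the
// rendering is neither blurry nor pixelated.
function matchCanvasResolution(canvas, pixelRatio) {
  const ratio = pixelRatio || 1;
  const targetWidth = Math.round(canvas.clientWidth * ratio);
  const targetHeight = Math.round(canvas.clientHeight * ratio);
  const needsResize = canvas.width !== targetWidth || canvas.height !== targetHeight;
  if (needsResize) {
    canvas.width = targetWidth;   // drawing-buffer resolution, not CSS size
    canvas.height = targetHeight;
  }
  return needsResize;
}

// In the browser you would then do something like:
//   const canvas = document.getElementById('WebARRocksFaceCanvas');
//   if (matchCanvasResolution(canvas, window.devicePixelRatio)) {
//     WEBARROCKSFACE.resize(); // notify the library after any resize
//   }
```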
This canvas will be used by WebGL both for the computation and the 3D rendering. When your page is loaded you should launch this function:
WEBARROCKSFACE.init({
  canvasId: 'WebARRocksFaceCanvas',
  NNCPath: '../../../neuralNets/NN_FACE_0.json', // neural network model
  callbackReady: function(errCode, spec){
    if (errCode){
      console.log('AN ERROR HAPPENED. ERROR CODE =', errCode);
      return;
    }
    // [init scene with spec...]
    console.log('INFO: WEBARROCKSFACE IS READY');
  }, // end callbackReady()

  // called at each render iteration (drawing loop):
  callbackTrack: function(detectState){
    // render your scene here
    // [... do something with detectState]
  } // end callbackTrack()
}); // end init call
Optional init arguments
- <integer> maxFacesDetected: only for multiple face detection - the maximum number of faces which can be detected and tracked. Should be between 1 (no multiple detection) and 8. See the Multiple faces section for more details,
- <integer> animateDelay: sets the number of milliseconds the browser waits at the end of the rendering loop before starting another detection. If you use the canvas of this library as a secondary element (for example in the PACMAN or EARTH NAVIGATION demos) you should set a small animateDelay value (for example 2 milliseconds) in order to avoid rendering lags,
- <function> onWebcamAsk: function launched just before asking the user to allow camera access,
- <function> onWebcamGet: function launched just after the user has accepted to share their video. It is called with the video element as argument,
- <dict> videoSettings: overrides the MediaStream API video settings, which are by default:
{
  'videoElement'        // not set by default. <video> element used.
                        // If you specify this parameter,
                        // all other settings will be useless:
                        // it means that you fully handle the video aspect
  'deviceId'            // not set by default
  'facingMode': 'user', // to use the rear camera, set to 'environment'
  'idealWidth': 800,    // ideal video width in pixels
  'idealHeight': 600,   // ideal video height in pixels
  'minWidth': 480,      // min video width in pixels
  'maxWidth': 1280,     // max video width in pixels
}

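As a sketch of how these defaults can be overridden, here is a hypothetical init call requesting the rear camera at HD resolution. It only uses parameters listed in this README (canvasId, NNCPath, videoSettings, callbackReady, callbackTrack); adapt the paths to your project.

```javascript
// Sketch: override the default video settings at init time.
WEBARROCKSFACE.init({
  canvasId: 'WebARRocksFaceCanvas',
  NNCPath: '../../../neuralNets/NN_FACE_0.json',
  videoSettings: {
    facingMode: 'environment', // rear camera instead of the default 'user'
    idealWidth: 1280,          // request HD video
    idealHeight: 720
  },
  callbackReady: function(errCode, spec){ /* ... */ },
  callbackTrack: function(detectState){ /* ... */ }
});
```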