# JeelizAR
JavaScript object detection lightweight library for augmented reality (WebXR demos included). It uses convolutional neural networks running on the GPU with WebGL.
**WARNING: this repository is deprecated and not maintained anymore. Please use WebAR.rocks.object instead.**
<p align="center"> <a href='https://youtu.be/a09NSXp_ENU'><img src='https://img.youtube.com/vi/a09NSXp_ENU/0.jpg'></a> <br/> <i><a href='https://jeeliz.com/demos/augmentedReality/demos/threejs/ARCoffee/' target='_blank'>Standalone AR Coffee</a> - Enjoy a free coffee offered by <a href='https://jeeliz.com'>Jeeliz</a>!<br/> The coffee cup is detected and a 3D animation is played in augmented reality.<br/> This demo only relies on JeelizAR and THREE.JS.</i> </p>

## Table of contents
- Features
- Architecture
- Demonstrations
- Specifications
- Neural network models
- About the tech
- License
- See also
- References
## Features
Here are the main features of the library:
- object detection
- webcam video feed capture using a helper
- on the fly neural network change
- demonstrations with WebXR integration
## Architecture

- `/demos/`: source code of the demonstrations,
- `/dist/`: heart of the library:
  - `jeelizAR.js`: main minified script,
- `/helpers/`: scripts which can help you to use this library in some specific use cases (like WebXR),
- `/libs/`: 3rd party libraries and 3D engines used in the demos,
- `/neuralNets/`: neural network models.
## Demonstrations

These are some demonstrations of this library. Some require a specific setup.
You can subscribe to the Jeeliz Youtube channel or to the @StartupJeeliz Twitter account to be kept informed of our cutting edge developments.
If you have made an application or a fun demonstration using this library, we would love to see it and insert a link here! Contact us on Twitter @StartupJeeliz or LinkedIn.
### Standard browser demos
These demonstrations work in a standard web browser. They only require webcam access.
- Simple object recognition using the webcam (for debugging): live demo source code
- Cat recognition (displayed as header of https://jeeliz.com for desktop computers only): live demo source code Youtube video
- THREE.js Sprite 33cl (12oz) can detection demo: source code live demo
- Standalone AR Coffee demo: source code live demo Youtube video
- Amazon Sumerian Sprite 33cl (12oz) detection demo: source code live demo Youtube video on mobile
### WebXR viewer demos
To run these demonstrations, you need a web browser implementing WebXR. We hope it will be implemented soon in all web browsers!
- If you have an iOS device (iPad, iPhone), you can install the WebXR Viewer from the Apple App Store. It is developed by the Mozilla Foundation. It is a modified Firefox with WebXR implemented using ARKit. You can then open the demonstrations from the URL bar of the application.
- For Android devices, it should work with WebARonARCore, but we have not tested it yet. Your device still needs to be compatible with ARCore.
Then you can run these demos:
- WebXR object labelling: live demo source code
- WebXR coffee: live demo source code Youtube video
### 8thWall demos
These demos run in a standard web browser on a mobile device or tablet. They rely on the amazing 8th Wall AR engine. We use the web version of the engine and we started from the THREE.JS web sample. The web engine is not released publicly yet, so you need to:
- host this repository using a local HTTPS server,
- get an API key for the web SDK from 8th Wall (subscribe and ask for access),
- write the key in the `index.html` of the demo you want to try (search and replace `<WEBAPPKEY>` with your real key),
- validate the specific device using a QR code or a link (it is very well explained in the 8th Wall getting started documentation).
The demo:
- AR Coffee: source code Youtube video
## Specifications

### Get started
The most basic integration example of this library is the first demo, the debug detection demo.
In `index.html`, we include in the `<head>` section the main library script, `/dist/jeelizAR.js`, the MediaStream API (formerly called getUserMedia API) helper, `/helpers/JeelizMediaStreamAPIHelper.js`, and the demo script, `demo.js`:
```html
<script src="../../dist/jeelizAR.js"></script>
<script src="../../helpers/JeelizMediaStreamAPIHelper.js"></script>
<script src="demo.js"></script>
```
In the <body> section of index.html, we put a <canvas> element which will be used to initialize the WebGL context used by the library for deep learning computation, and to possibly display a debug rendering:
```html
<canvas id='debugJeeARCanvas'></canvas>
```
Then, in demo.js, we get the Webcam video feed after the loading of the page using the MediaStream API helper:
```javascript
JeelizMediaStreamAPIHelper.get(DOMVIDEO, init, function(){
  alert('Cannot get video bro :(');
}, {
  video: true, // mediaConstraints
  audio: false
});
```
You can replace this part with a static video, and you can also provide media constraints to specify the video resolution.
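For example, a constraints object requesting a 720p feed might look like this. This is a sketch using standard MediaStream API constraints, not a JeelizAR-specific format; the exact resolution granted depends on the webcam:

```javascript
// Standard MediaStream API constraints, passed as the last argument
// of JeelizMediaStreamAPIHelper.get(...):
var mediaConstraints = {
  video: {
    width:  {ideal: 1280}, // preferred capture width in pixels
    height: {ideal: 720}   // preferred capture height in pixels
  },
  audio: false // the library does not need audio
};
```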
When the video feed is captured, the callback function init is launched. It initializes this library:
```javascript
function init(){
  JEEARAPI.init({
    canvasId: 'debugJeeARCanvas',
    video: DOMVIDEO,
    callbackReady: function(errLabel){
      if (errLabel){
        alert('An error happens bro: ' + errLabel);
      } else {
        load_neuralNet();
      }
    }
  });
}
```
The function `load_neuralNet` loads the neural network model:

```javascript
function load_neuralNet(){
  JEEARAPI.set_NN('../../neuralNets/basic4.json', function(errLabel){
    if (errLabel){
      console.log('ERROR: cannot load the neural net', errLabel);
    } else {
      iterate();
    }
  }, options);
}
```
Instead of giving the URL of the neural network, you can also give the parsed JSON object.
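For instance, if the model JSON is already bundled with your application as a string, you can parse it yourself and pass the resulting object. In this sketch the model content is a placeholder, not a real network:

```javascript
// Placeholder model string: a real neural network JSON would come
// from /neuralNets/ or your own build pipeline.
var NNString = '{"exampleKey": "exampleValue"}';
var NNObject = JSON.parse(NNString);
// The parsed object can be given to set_NN instead of a URL:
// JEEARAPI.set_NN(NNObject, callback, options);
console.log(typeof NNObject); // 'object'
```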
The function `iterate` starts the iteration loop:

```javascript
function iterate(){
  var detectState = JEEARAPI.detect(3);
  if (detectState.label){
    console.log(detectState.label, 'IS DETECTED YEAH !!!');
  }
  window.requestAnimationFrame(iterate);
}
```
### Initialization arguments

`JEEARAPI.init` takes a dictionary as argument with these properties:

- `<video> video`: HTML5 video element (can come from the MediaStream API helper). If `false`, update the source texture from a `videoFrameBuffer` object provided when calling `JEEARAPI.detect(...)` (like in WebXR demos),
- `<dict> videoCrop`: see Video crop section for more details,
- `<function> callbackReady`: callback function launched when ready or if there was an error. Called with the error label or `false`,
- `<string> canvasId`: id of the canvas from which the WebGL context used for deep learning processing will be created,
- `<canvas> canvas`: if `canvasId` is not provided, you can also provide directly the `<canvas>` element,
- `<dict> scanSettings`: see Scan settings section for more details,
- `<boolean> isDebugRender`: if `true`, a debug rendering will be displayed on the `<canvas>` element. Useful for debugging, but it should be set to `false` for production because it wastes GPU computing resources,
- `<int> canvasSize`: size of the detection canvas in pixels (should be square). The special value `-1` keeps the canvas size. Default: `512`,
- `<boolean> followZRot`: only works with neural network models outputting pitch, roll and yaw angles. Crops the input window using the roll of the current detection during the tracking stage,
- `[<float>, <float>] ZRotRange`: only works if `followZRot = true`. Randomizes the initial rotation angle. Values are in radians. Default: `[0, 0]`.
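As a sketch, these arguments can be gathered in a plain dictionary before calling `JEEARAPI.init`. The values below are illustrative examples, not required defaults:

```javascript
// Illustrative options dictionary (values are examples, not defaults):
var initOptions = {
  canvasId: 'debugJeeARCanvas', // canvas used for the WebGL context
  isDebugRender: false,         // disable debug rendering for production
  canvasSize: 512,              // square detection canvas, in pixels
  followZRot: false             // no roll-following crop during tracking
};
// Then call JEEARAPI.init(initOptions) with video and callbackReady
// added as in the Get started example.
```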
### The Detection function

The function which triggers the detection is `JEEARAPI.detect(<int>nDetectionsPerLoop, <videoFrame>frame, <dictionary>options)`.

- `<int> nDetectionsPerLoop` is the number of consecutive detections performed per call. The higher it is, the faster the detectio
