
LeapTrainer.js v0.31

A gesture and pose learning and recognition framework for the Leap Motion.

v0.3 adds the capability to learn and recognize motionless poses, as seen in the new video below.

For full details of the new features and fixes, take a look at the release notes.

Below is a video of the LeapTrainer UI learning and then recognizing some movements and poses. An online demo of the UI is available, as is the previous v0.2 demo video.


This framework currently supports high- and low-resolution gesture encoding, as well as geometric template matching, cross-correlation, and neural network-based gesture and pose recognition.

It is intended that developers use this framework to explore alternative and improved capture and recognition algorithms.

All contributions are welcome - the best known implementation of each of the core framework functions will be used as the framework default. Currently these are:

  • Gesture recording triggered by frame velocity
  • 3D geometric positioning capture for simple gesture recording
  • Geometric template matching for gesture recognition


What are gestures and poses?

The Leap Motion provides a mechanism to track hand and fingertip movement within a space.

LeapTrainer.js builds on top of this by providing a simple API to record and later recognise hand movements and positions within this space. This recognition capability can be easily integrated into new applications to help build motion interfaces.

A gesture is a hand movement with a recognizable start and end - for example, a wave, a tapping motion, a swipe right or left.

A pose is a hand position that is held motionless for a few moments - for example, holding up some fingers to indicate a number, pointing, or making a stop sign.

The difference in how LeapTrainer recognizes gestures as opposed to poses is that gesture recognition starts when hand movement suddenly speeds up, and ends when it slows down (or stops). So a quick karate chop will trigger gesture recognition. Pose recognition, on the other hand, starts when movement suddenly stops and remains more or less unchanged for a short period - so just holding a thumbs-up for a moment or two will trigger pose recognition.
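The two trigger conditions above can be sketched as a small state machine. The code below is an illustrative model, not LeapTrainer's actual implementation; the idea of reducing each frame to a single velocity and the default threshold values are taken from the Options section of this README.

```javascript
// Illustrative sketch of velocity-triggered recording (not LeapTrainer's
// actual code). Each frame is reduced to one number: the velocity of the
// fastest-moving hand or fingertip in view.
function createRecordingTrigger(opts = {}) {
  const minRecordingVelocity = opts.minRecordingVelocity || 300; // gesture start
  const maxRecordingVelocity = opts.maxRecordingVelocity || 30;  // pose start
  const minPoseFrames = opts.minPoseFrames || 75;

  let stillFrames = 0; // consecutive near-motionless frames seen so far

  // Returns 'gesture', 'pose', or null for each incoming frame velocity.
  return function onFrame(velocity) {
    if (velocity >= minRecordingVelocity) {
      stillFrames = 0;
      return 'gesture'; // fast movement: record gesture frames
    }
    if (velocity <= maxRecordingVelocity) {
      stillFrames++;
      // Hand has been (almost) motionless long enough: attempt pose recognition
      if (stillFrames >= minPoseFrames) return 'pose';
      return null;
    }
    stillFrames = 0; // moderate movement: neither gesture nor pose
    return null;
  };
}
```

In this model a karate chop produces a run of high-velocity frames classified as 'gesture', while holding a thumbs-up produces minPoseFrames consecutive low-velocity frames before 'pose' fires.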

Usage

First, you'll need a Leap Motion connected to the machine you're running your browser on. The Leap monitors movement and transmits data to the browser via the Leap Motion Javascript API.

This data is then analysed by the LeapTrainer framework and used to learn gestures and fire events when known gestures and poses are detected.

To use the framework, include the Leap Motion Javascript API and leaptrainer.js in your HTML page:

<script src="http://js.leapmotion.com/0.2.0/leap.min.js"></script>

<script src="/path/to/leaptrainer.min.js"></script>

Then create a LeapTrainer controller object:

var trainer = new LeapTrainer.Controller();

This object will internally create a LeapMotion.Controller with which to communicate with the connected device. If you prefer to create your own Leap.Controller object, you can pass one as a parameter to the constructor - just make sure to call the connect function once the trainer is created.

var leapController = new Leap.Controller();

var trainer = new LeapTrainer.Controller({controller: leapController});

leapController.connect();

The LeapTrainer controller constructor also accepts a set of configuration variables.

For developers interested in developing new recording or recognition algorithms, the LeapTrainer controller can be easily sub-classed.

Once a LeapTrainer controller is created it can be used to train new gestures, receive events when known gestures are detected, and import and export gestures.

Training the system

A new gesture or pose can be created like this:

trainer.create('Halt');

By default, calling the create function will switch the controller into training mode. If a truthy second parameter is passed, the controller will just store the name without moving directly into training.

trainer.create('Halt', true); // The controller won't switch to training mode

While in training mode LeapTrainer will watch for a configured number of training gestures, and once enough input has been gathered 'Halt' will be added to the list of known gestures and poses.

If a learning algorithm is being used that requires an initialization period (for example, neural network-based recognition) then a training completion event will fire once the algorithm has been fully initialized.
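The sample-gathering step described above can be sketched as follows. This is a hypothetical illustration of the flow, not library code; the event name 'training-complete' mirrors the "training completion event" this README mentions, but the exact name and callback shape are assumptions.

```javascript
// Minimal sketch of the training flow (illustrative, not LeapTrainer's code).
// Collects samples until enough input has been gathered, then emits a
// completion event ('training-complete' is an assumed event name).
function createTrainingSession(name, trainingGestures, emit) {
  const samples = [];
  return function addSample(gestureData) {
    samples.push(gestureData);
    if (samples.length >= trainingGestures) {
      emit('training-complete', name, samples); // enough input gathered
      return true; // training finished
    }
    return false; // still waiting for more training gestures
  };
}
```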

The LeapTrainer UI can be used as an interface for training the system.

Receiving events when movements are recognized

Once a gesture or pose has been learned LeapTrainer will fire an event containing the movement's name whenever it recognizes it again.

Components can register to receive these events using the on() function:

trainer.on('Halt', function() { console.log('Stop right there!'); });

Previously registered listeners can unregister themselves using the off() function:

trainer.off('Halt', registeredFunction);
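An on()/off() API like this is typically backed by a small listener registry keyed by event name. The sketch below is a generic illustration of that pattern, not LeapTrainer's internals; the fire() function stands in for whatever the recognizer calls when a movement is matched.

```javascript
// Generic listener registry illustrating the on()/off() pattern
// (a sketch, not LeapTrainer's actual implementation).
function createEmitter() {
  const listeners = {}; // event name -> array of callbacks
  return {
    on(event, fn) {
      (listeners[event] = listeners[event] || []).push(fn);
    },
    off(event, fn) {
      const list = listeners[event] || [];
      const i = list.indexOf(fn);
      if (i !== -1) list.splice(i, 1); // remove only this listener
    },
    fire(event, ...args) {
      (listeners[event] || []).forEach(fn => fn(...args));
    },
  };
}
```

In this model, trainer.on('Halt', …) registers a callback under the gesture's name, and recognition calls fire('Halt') to notify every registered listener.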

Importing and exporting from LeapTrainer

Gestures and poses can be exported from LeapTrainer for persistence or transport using the toJSON() function, which accepts a name as a parameter:

var savedGesture = trainer.toJSON('Halt');

Previously exported gestures and poses can be imported into a LeapTrainer controller using the fromJSON() function:

trainer.fromJSON(savedData);

Sub-classes of the controller may implement alternative export formats. By default the JSON exported just contains the name of the gesture, a flag indicating if it's a gesture or a pose, and the stored training data - something like this:

{"name":"Halt", "pose":"false", "data":[[1.940999984741211,8.213000297546387, ... ]]}

Since the training data format may change between controller sub-classes, it is not necessarily true that gestures exported from one LeapTrainer Controller sub-class will be compatible with another. For example, the neural network controller adds an encoded trained neural network to the export format so that networks don't need to be re-trained on import.
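The export/import round trip can be sketched with plain JSON. The field names below follow the example export above, but the in-memory gesture store is hypothetical, not LeapTrainer's internal representation.

```javascript
// Sketch of a toJSON/fromJSON round trip over a plain gesture store
// (field names follow the export example above; the store itself is
// hypothetical, not how LeapTrainer stores gestures internally).
const gestures = {}; // name -> { pose, data }

function toJSON(name) {
  const g = gestures[name];
  return JSON.stringify({ name: name, pose: g.pose, data: g.data });
}

function fromJSON(json) {
  const g = JSON.parse(json);
  gestures[g.name] = { pose: g.pose, data: g.data };
  return g.name;
}
```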

Options

Options can be passed to the LeapTrainer.Controller constructor like so:

new LeapTrainer.Controller({minRecordingVelocity: 100, downtime: 100});

Some options apply to the default implementations of functions, and may be removed or redundant in sub-classes of the LeapTrainer controller.

  • controller : An instance of Leap.Controller class from the Leap Motion Javascript API. This will be created with default settings if not passed as an option.

  • pauseOnWindowBlur : If this variable is TRUE, then the LeapTrainer Controller will pause when the browser window loses focus, and resume when it regains focus (default: FALSE)

  • minRecordingVelocity: The minimum velocity a frame needs to be measured at in order to trigger gesture recording. Frames with a velocity below this speed will cause gesture recording to stop. Frame velocity is measured as the fastest moving hand or finger tip in view (default: 300)

  • maxRecordingVelocity: The maximum velocity at which a frame can be measured and still trigger pose recording; above this velocity, pose recording will be stopped (default: 30)

  • minGestureFrames: The minimum number of frames that can contain a recognisable gesture (default: 5)

  • minPoseFrames: The minimum number of frames that need to register as recordable before pose recording is actually triggered. The higher this number, the longer a pose needs to be held in position before recognition will be attempted. (default: 75)

  • hitThreshold: The return value of the recognition function above which a gesture is considered recognized. Raise this to make gesture recognition more strict (default: 0.7)

  • trainingCountdown: The number of seconds after startTraining is called before training begins. This number of training-countdown events will be emitted. (default: 3)

  • trainingGestures: The number of training gestures required to be performed in training mode (default: 1)

  • convolutionFactor: The factor by which training samples will be convolved over a gaussian distribution in order to expand the input training data. Set this to zero to disable convolution (default: 0)

  • downtime: The number of milliseconds after a gesture or pose is identified during which no new recognition will be attempted
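The convolutionFactor option above can be illustrated as follows: each training sample is duplicated with small gaussian noise added to every coordinate, expanding N samples into N × (factor + 1). This is a sketch of the idea, not the library's implementation; the noise scale sigma is an assumed parameter.

```javascript
// Illustrative sketch of gaussian training-data expansion (the idea behind
// convolutionFactor; not LeapTrainer's actual implementation).
function expandTrainingData(samples, factor, sigma = 0.05) {
  const expanded = samples.slice(); // keep the original samples
  for (const sample of samples) {
    for (let i = 0; i < factor; i++) {
      // Perturb every coordinate with gaussian noise
      expanded.push(sample.map(v => v + gaussian() * sigma));
    }
  }
  return expanded;
}

// Standard normal sample via the Box-Muller transform
function gaussian() {
  const u = 1 - Math.random(); // in (0, 1], avoids log(0)
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}
```

Setting factor to zero returns the samples unchanged, matching the documented way to disable convolution.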
