Gesture Recognition Toolkit
The Gesture Recognition Toolkit (GRT) is a cross-platform, open-source C++ machine learning library designed specifically for real-time gesture recognition.
The GRT has been designed to:
- be easy to use and integrate into your existing C++ projects
- be compatible with any type of sensor or data input
- be easy to rapidly train with your own gestures
- be easy to extend and adapt with your own custom processing or feature extraction algorithms (if needed)
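To illustrate the last point, here is a minimal standalone sketch of the kind of custom processing algorithm you might write yourself: a moving-average filter that smooths noisy sensor data before it reaches a classifier. This is plain C++, not GRT code, and the class name is hypothetical.

```cpp
#include <deque>
#include <numeric>

// A simple moving-average filter: a typical custom pre-processing step
// for smoothing noisy accelerometer or sensor data in real time.
// (Hypothetical standalone sketch, not part of the GRT API.)
class MovingAverageFilter {
public:
    explicit MovingAverageFilter(std::size_t windowSize) : windowSize(windowSize) {}

    // Push a new sample and return the mean of the most recent samples.
    double filter(double x) {
        window.push_back(x);
        if (window.size() > windowSize) window.pop_front();
        double sum = std::accumulate(window.begin(), window.end(), 0.0);
        return sum / static_cast<double>(window.size());
    }

private:
    std::size_t windowSize;
    std::deque<double> window;
};
```

A real module would also handle multi-dimensional input, but the sliding-window structure stays the same.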
The GRT features a large number of algorithms that can be used to:
- recognize static postures (such as if a user has their hands in a specific posture or if a device fitted with an accelerometer is being held in a distinct orientation)
- recognize dynamic temporal gestures (such as a swipe or tap gesture)
- perform regression (i.e. continually map an input signal to an output signal, such as mapping the angle of a user's hands to the angle a steering wheel should be turned in a driving game)
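The regression case above reduces, in its simplest form, to learning a continuous mapping from input to output. A minimal standalone sketch (not GRT code; class name hypothetical) is a one-dimensional least-squares fit, e.g. from a measured hand angle to a steering-wheel angle:

```cpp
#include <vector>
#include <cstddef>

// Least-squares fit of y = a*x + b: the simplest form of the regression
// task described above, e.g. mapping a hand angle (x) to the steering-wheel
// angle (y) it should produce in a driving game.
struct LinearMap {
    double a = 0.0, b = 0.0;

    void train(const std::vector<double>& x, const std::vector<double>& y) {
        const std::size_t n = x.size();
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (std::size_t i = 0; i < n; ++i) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; sxy += x[i] * y[i];
        }
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        b = (sy - a * sx) / n;
    }

    // Continuously map each new input sample to an output value.
    double predict(double x) const { return a * x + b; }
};
```

A real regression module (e.g. a neural network) would learn non-linear mappings, but the train-then-continuously-predict workflow is the same.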
The GRT currently works across several operating systems including:
- Windows (Tested on Windows XP, Windows 7)
- OS X (Tested on 10.7)
- Linux (Tested on Ubuntu 12)
Some example code demonstrating how to create a new gesture recognition pipeline; add a pre-processing module, two feature-extraction modules, an ANBC classifier module, and a post-processing module; load some training and test data; and then train and test the pipeline.
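A sketch of such a pipeline is below. It assumes the GRT headers are available and that the class names and constructor parameters shown (GestureRecognitionPipeline, MovingAverageFilter, FFT, ZeroCrossingCounter, ANBC, ClassLabelFilter, LabelledClassificationData) match the development build you download, so treat it as a guide rather than a drop-in program:

```cpp
#include "GRT.h"
using namespace GRT;

int main() {
    // Create a new gesture recognition pipeline
    GestureRecognitionPipeline pipeline;

    // Add a pre-processing module to smooth the incoming 1-dimensional signal
    // (parameters here are placeholders; check the wiki for your sensor setup)
    pipeline.addPreProcessingModule( MovingAverageFilter(5, 1) );

    // Add two feature-extraction modules
    pipeline.addFeatureExtractionModule( FFT(512, 1, 1) );
    pipeline.addFeatureExtractionModule( ZeroCrossingCounter(20) );

    // Set the classifier: Adaptive Naive Bayes Classifier (ANBC)
    pipeline.setClassifier( ANBC() );

    // Add a post-processing module to filter out spurious classifications
    pipeline.addPostProcessingModule( ClassLabelFilter(3, 5) );

    // Load some training and test data
    LabelledClassificationData trainingData, testData;
    if( !trainingData.loadDatasetFromFile("TrainingData.txt") ) return EXIT_FAILURE;
    if( !testData.loadDatasetFromFile("TestData.txt") ) return EXIT_FAILURE;

    // Train and then test the pipeline
    if( !pipeline.train( trainingData ) ) return EXIT_FAILURE;
    if( !pipeline.test( testData ) ) return EXIT_FAILURE;

    cout << "Test accuracy: " << pipeline.getTestAccuracy() << endl;
    return EXIT_SUCCESS;
}
```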
GRT Machine Learning Algorithms
The current development build of the GRT contains a number of supervised machine-learning algorithms for classification, temporal gesture recognition, and regression.
GRT Supporting Algorithms
In addition to the machine-learning algorithms, the GRT also contains a large number of pre-processing, post-processing, and feature-extraction algorithms.
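As an example of what a feature-extraction step computes, here is a standalone sketch (plain C++, not GRT code) of a root-mean-square feature, which captures the overall energy of a window of sensor samples regardless of its exact shape:

```cpp
#include <vector>
#include <cmath>

// Root-mean-square (RMS) of a window of samples: a common hand-rolled
// feature for accelerometer gestures, summarizing the signal's energy.
// (Hypothetical standalone sketch, not part of the GRT API.)
double rmsFeature(const std::vector<double>& window) {
    double sumSquares = 0.0;
    for (double x : window) sumSquares += x * x;
    return std::sqrt(sumSquares / static_cast<double>(window.size()));
}
```

Features like this are computed over a sliding window of raw (or pre-processed) samples and passed on to the classifier in place of the raw signal.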
The GRT has not yet been officially released, but a development version can be downloaded from Google Code (GRT) and from the GRT Wiki.
You can find the main GRT Wiki here, which includes tutorials, code examples, and instructions on how to download and integrate the GRT into your own C++ projects.
The wiki also features a guide to getting started if you've never used machine learning or gesture recognition before.