FAQ

GRT.FAQ History


June 20, 2013, at 02:34 PM by 18.111.96.209 -
Changed lines 44-46 from:
The GRT has a large number of [[GRT/Reference |utilities]] and [[GRT/Reference |data structures]] to assist you to record, label, manage, save and load training data. For example, if you are using an [[GRT/ANBC | ANBC]], [[GRT/KNN | KNN]], [[GRT/GMMClassifier | GMM]] or [[GRT/SVM | SVM]] classifier then you should record your training data using the [[GRT/LabelledClassificationData | LabelledClassificationData]]. If you have already recorded some labelled data, then you can use the LabelledClassificationData class to load CSV data directly from a file, allowing you to record, label and edit your training data in another program (such as Excel or Matlab). In the majority of cases however, you will want to record and label your training data on the fly. You can easily do this using the [[GRT/LabelledClassificationData | LabelledClassificationData]]. The code below demonstrates how to do this.
to:
The GRT has a large number of [[GRT/Reference |utilities]] and [[GRT/Reference |data structures]] to help you record, label, manage, save and load training data. For example, if you are using an [[GRT/ANBC | ANBC]], [[GRT/KNN | KNN]], [[GRT/GMMClassifier | GMM]] or [[GRT/SVM | SVM]] classifier then you should record your training data using the [[GRT/LabelledClassificationData | LabelledClassificationData]]. The [[GRT/LabelledClassificationData | LabelledClassificationData]] is the default data structure for labelled classification data, which simply means that each data sample (a c++ double vector) has a corresponding label (an unsigned integer) that represents the class label (i.e. the gesture label) of that specific data sample.

If you have already recorded some labelled data, then you can use the LabelledClassificationData class to load CSV data directly from a file, allowing you to record, label and edit your training data in another program (such as Excel or Matlab). In the majority of cases, however, you will want to record and label your training data on the fly. You can easily do this using the [[GRT/LabelledClassificationData | LabelledClassificationData]]. The code below demonstrates how to do this.
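If you go the CSV route mentioned above, loading the file might look something like the minimal sketch below. Note that @@MyTrainingData.csv@@ is a hypothetical filename and the loadDatasetFromCSVFile method name is an assumption, so check the LabelledClassificationData documentation for the exact method name and signature in your GRT version.

(:source lang=C++ wrap=OUTPUT_WIDTH -getcode :) [@
//NOTE: a minimal sketch - MyTrainingData.csv is a hypothetical filename and the
//loadDatasetFromCSVFile method name is an assumption, check the LabelledClassificationData
//header for the exact method name and signature in your GRT version
LabelledClassificationData trainingData;

//Load the CSV data you recorded or edited in another program (such as Excel or Matlab)
if( !trainingData.loadDatasetFromCSVFile( "MyTrainingData.csv" ) ){
    //Handle the error, for example the file could not be found or parsed
}

//The dataset can now be saved in the GRT file format, or used to train a classifier
trainingData.saveDatasetToFile( "TrainingData.txt" );
@]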
June 20, 2013, at 02:31 PM by 18.111.96.209 -
Changed lines 44-46 from:
The GRT has a large number of [[GRT/Reference |utilities]] and [[GRT/Reference |data structures]] to assist you to record, label, manage, save and load training data.

For example, if you are using an [[GRT/ANBC | ANBC]], [[GRT/KNN | KNN]], [[GRT/GMMClassifier | GMM]] or [[GRT/SVM | SVM]] classifier then you should record your training data using the [[GRT/LabelledClassificationData | LabelledClassificationData]]. If you have already recorded some labelled data, then you can use the LabelledClassificationData class to load CSV data directly from a file, allowing you to record, label and edit your training data in another program (such as Excel or Matlab). In the majority of cases however, you will want to record and label your training data on the fly. You can easily do this using the [[GRT/LabelledClassificationData | LabelledClassificationData]]. The code below demonstrates how to do this.
to:
The GRT has a large number of [[GRT/Reference |utilities]] and [[GRT/Reference |data structures]] to assist you to record, label, manage, save and load training data. For example, if you are using an [[GRT/ANBC | ANBC]], [[GRT/KNN | KNN]], [[GRT/GMMClassifier | GMM]] or [[GRT/SVM | SVM]] classifier then you should record your training data using the [[GRT/LabelledClassificationData | LabelledClassificationData]]. If you have already recorded some labelled data, then you can use the LabelledClassificationData class to load CSV data directly from a file, allowing you to record, label and edit your training data in another program (such as Excel or Matlab). In the majority of cases however, you will want to record and label your training data on the fly. You can easily do this using the [[GRT/LabelledClassificationData | LabelledClassificationData]]. The code below demonstrates how to do this.
June 20, 2013, at 01:52 PM by 18.111.96.209 -
Changed lines 55-56 from:
//STEP 2 - Get the realtime data from your sensor and label it with the corresponding gesture it belongs to, you would do this many times for every gesture not just once
to:
//STEP 2 - Get the realtime data from your sensor and label it with the corresponding gesture it belongs to
//NOTE: you would do this many times for every gesture not just once
June 20, 2013, at 01:52 PM by 18.111.96.209 -
Changed lines 46-47 from:
For example, if you are using an [[GRT/ANBC | ANBC]], [[GRT/KNN | KNN]], [[GRT/GMMClassifier | GMM]] or [[GRT/SVM | SVM]] classifier then you should record your training data using the [[GRT/LabelledClassificationData | LabelledClassificationData]]:
(:source lang=C++ wrap=OUTPUT_WIDTH -getcode :) [@//Create a new instance of the LabelledClassificationData
to:
For example, if you are using an [[GRT/ANBC | ANBC]], [[GRT/KNN | KNN]], [[GRT/GMMClassifier | GMM]] or [[GRT/SVM | SVM]] classifier then you should record your training data using the [[GRT/LabelledClassificationData | LabelledClassificationData]]. If you have already recorded some labelled data, then you can use the LabelledClassificationData class to load CSV data directly from a file, allowing you to record, label and edit your training data in another program (such as Excel or Matlab). In the majority of cases however, you will want to record and label your training data on the fly. You can easily do this using the [[GRT/LabelledClassificationData | LabelledClassificationData]]. The code below demonstrates how to do this.

(:source lang=C++ wrap=OUTPUT_WIDTH -getcode :) [@
//STEP 1 - Create a new instance of the LabelledClassificationData and set the dimensionality of your data
Changed line 52 from:
//Set the dimensionality of the data
to:
//Set the dimensionality of the data, in this example our data has 3 dimensions
Changed lines 55-57 from:
//////////////////////////////// THIS CODE WOULD BE PLACED IN THE UPDATE OR DATA LOOP OF YOUR PROGRAM ////////////////////////////////

//Here you would grab some data from your sensor and label it with the corresponding gesture it belongs to
to:
//STEP 2 - Get the realtime data from your sensor and label it with the corresponding gesture it belongs to, you would do this many times for every gesture not just once
Changed lines 65-67 from:
//////////////////////////////// YOU WOULD CALL THE CODE ABOVE MULTIPLE TIMES TO RECORD LOTS OF DATA FROM YOUR SENSOR ////////////////////////////////

//
After recording your training data you can then save it to a file
to:
//STEP 3 - After recording your training data you can then save it to a file
Changed line 73 from:
You can also use the LabelledClassificationData class to load CSV data directly from a file, allowing you to record, label and edit your training data in another program (such as Excel or Matlab). You can find a complete example of how to record, label, manage, save and load training data on the [[GRT/LabelledClassificationData | LabelledClassificationData]] reference page. The reference page also contains a link to the full documentation for the LabelledClassificationData class.
to:
You can find a complete example of how to record, label, manage, save and load training data on the [[GRT/LabelledClassificationData | LabelledClassificationData]] reference page. The reference page also contains a link to the full documentation for the LabelledClassificationData class.
June 20, 2013, at 01:45 PM by 18.111.96.209 -
Changed lines 42-46 from:
Before you can use a gesture-recognition pipeline to recognize your real-time gestures, you need to train the classification or regression algorithm at the core of the pipeline. To train the algorithm you need to record some examples of the gestures you want the pipeline to recognize and then use this training data to train the pipeline.

The
GRT has a number of [[GRT/Reference |utilities]] and [[GRT/Reference |data structures]] to assist you to record, label, manage, save and load training data.

For example, if you are using an [[GRT/ANBC | ANBC]], [[GRT/KNN | KNN]], [[GRT/GMMClassifier | GMM]] or [[GRT/SVM | SVM]] classifier at the core of the pipeline then you should record your training data using the [[GRT/LabelledClassificationData | LabelledClassificationData]]:
to:
Before you can use any of the GRT algorithms to recognize your real-time gestures, you first need to train a classification model. To do this, you need to record some examples of the gestures you want the classifier to recognize and then use this training data to train the model.

The GRT has a large number of [[
GRT/Reference |utilities]] and [[GRT/Reference |data structures]] to assist you to record, label, manage, save and load training data.

For example, if you are using an [[GRT/ANBC | ANBC]], [[GRT/KNN | KNN]], [[GRT/GMMClassifier | GMM]] or [[GRT/SVM | SVM]] classifier then you should record your training data using the [[GRT/LabelledClassificationData | LabelledClassificationData]]:
Added lines 53-54:
//////////////////////////////// THIS CODE WOULD BE PLACED IN THE UPDATE OR DATA LOOP OF YOUR PROGRAM ////////////////////////////////
Added lines 64-65:

//////////////////////////////// YOU WOULD CALL THE CODE ABOVE MULTIPLE TIMES TO RECORD LOTS OF DATA FROM YOUR SENSOR ////////////////////////////////
June 20, 2013, at 01:40 PM by 18.111.96.209 -
Added line 3:
* [[#DataSetQuestion | How do I record some training data to train a classifier?]]
Added lines 39-71:

[[#DataSetQuestion]]
!!'''How do I record some training data to train a classifier?'''
Before you can use a gesture-recognition pipeline to recognize your real-time gestures, you need to train the classification or regression algorithm at the core of the pipeline. To train the algorithm you need to record some examples of the gestures you want the pipeline to recognize and then use this training data to train the pipeline.

The GRT has a number of [[GRT/Reference |utilities]] and [[GRT/Reference |data structures]] to assist you to record, label, manage, save and load training data.

For example, if you are using an [[GRT/ANBC | ANBC]], [[GRT/KNN | KNN]], [[GRT/GMMClassifier | GMM]] or [[GRT/SVM | SVM]] classifier at the core of the pipeline then you should record your training data using the [[GRT/LabelledClassificationData | LabelledClassificationData]]:
(:source lang=C++ wrap=OUTPUT_WIDTH -getcode :) [@//Create a new instance of the LabelledClassificationData
LabelledClassificationData trainingData;

//Set the dimensionality of the data
trainingData.setNumDimensions( 3 );

//Here you would grab some data from your sensor and label it with the corresponding gesture it belongs to
UINT gestureLabel = 1;
vector< double > sample(3);
sample[0] = 0.0; //....Data from your sensor would go here
sample[1] = 0.0; //....Data from your sensor would go here
sample[2] = 0.0; //....Data from your sensor would go here

//Add the sample to the training data
trainingData.addSample( gestureLabel, sample );

//After recording your training data you can then save it to a file
bool saveResult = trainingData.saveDatasetToFile( "TrainingData.txt" );

//This can then be loaded later
bool loadResult = trainingData.loadDatasetFromFile( "TrainingData.txt" );
@]
where @@UINT@@ is a GRT type representing an unsigned int.

You can also use the LabelledClassificationData class to load CSV data directly from a file, allowing you to record, label and edit your training data in another program (such as Excel or Matlab). You can find a complete example of how to record, label, manage, save and load training data on the [[GRT/LabelledClassificationData | LabelledClassificationData]] reference page. The reference page also contains a link to the full documentation for the LabelledClassificationData class.
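Once you have a dataset like the one above, the next step is normally to train a classifier with it. The sketch below shows roughly what that looks like; the choice of the ANBC classifier and the exact train(...) overload are assumptions here, so check the GestureRecognitionPipeline documentation for your GRT version.

(:source lang=C++ wrap=OUTPUT_WIDTH -getcode :) [@
//NOTE: a sketch only - the choice of the ANBC classifier and the train( trainingData ) overload
//are assumptions, check the GestureRecognitionPipeline documentation for your GRT version
GestureRecognitionPipeline pipeline;

//Use an ANBC classifier at the core of the pipeline (any of the GRT classifiers could be used here)
pipeline.setClassifier( ANBC() );

//Train the pipeline using the labelled training data recorded above
if( !pipeline.train( trainingData ) ){
    //Handle the error - training can fail if, for example, the dataset is empty
}
@]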
Deleted line 3:
Deleted lines 29-30:
----
Deleted lines 37-38:

----
Changed lines 34-35 from:
!!!'''Can I use the GRT with a Leap?'''
to:
!!'''Can I use the GRT with a Leap?'''
Changed line 45 from:
!!!'''I've developed my own custom feature extraction algorithm and want to use it with the GRT classification algorithms, is this possible?'''
to:
!!'''I've developed my own custom feature extraction algorithm and want to use it with the GRT classification algorithms, is this possible?'''
Changed line 8 from:
!!!'''Can I use the GRT with a Kinect?'''
to:
!!'''Can I use the GRT with a Kinect?'''
Changed line 40 from:
Note that, because of occlusion of key joints by the hand and other fingers etc., you should choose carefully what values you use as input to the GRT. For example, if you wanted to use the GRT to recognize if the user was making a ''right hand swipe gesture'', then it probably makes sense only to use the 3D centroid value for the right hand as input and ignore any finger values from the right hand and also all values from the left hand. One of the main disadvantages of the data that comes from the Leap SDK (at least in the current early versions of the SDK) is that the data often requires extensive filtering from sophisticated tracking algorithms (such as Kalman filters or particle filters) to mitigate occlusions and erroneous sensor estimates.
to:
Note that, because of occlusion of key joints by the hand and other fingers etc., you should choose carefully what values you use as input to the GRT. For example, if you wanted to use the GRT to recognize if the user was making a ''right hand swipe gesture'', then it probably makes sense only to use the 3D centroid value for the right hand as input (and ignore any finger values from the right hand and also all values from the left hand). One of the main disadvantages of the data that comes from the Leap SDK (at least in the early versions of the SDK) is that the data often requires extensive filtering from sophisticated tracking algorithms (such as Kalman filters or particle filters) to mitigate occlusions and erroneous sensor estimates. Without such filtering, recognizing certain gestures becomes difficult, as there are frequently samples in which not all of the data for the key joints you need is available.
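If a full Kalman or particle filter is not available, even a simple smoothing step can reduce some of the jitter in the raw estimates before they reach the GRT. The sketch below is a plain moving-average smoother written from scratch; it is only a simple stand-in for the tracking filters mentioned above (the GRT also provides its own pre-processing modules that could be used instead).

(:source lang=C++ wrap=OUTPUT_WIDTH -getcode :) [@
//A minimal hand-rolled moving-average smoother for an N-dimensional sample stream
//NOTE: this is only a simple stand-in for the Kalman/particle filters mentioned above
#include <vector>
#include <deque>

class SimpleMovingAverage{
public:
    SimpleMovingAverage( unsigned int windowSize, unsigned int numDimensions )
        : windowSize( windowSize ), numDimensions( numDimensions ){}

    std::vector< double > filter( const std::vector< double > &x ){
        //Add the new sample to the window, dropping the oldest sample if the window is full
        window.push_back( x );
        if( window.size() > windowSize ) window.pop_front();

        //Average each dimension over the samples currently in the window
        std::vector< double > y( numDimensions, 0.0 );
        for( unsigned int i=0; i<window.size(); i++ ){
            for( unsigned int j=0; j<numDimensions; j++ ) y[j] += window[i][j];
        }
        for( unsigned int j=0; j<numDimensions; j++ ) y[j] /= double( window.size() );
        return y;
    }

private:
    unsigned int windowSize;
    unsigned int numDimensions;
    std::deque< std::vector< double > > window;
};
@]

Each raw input vector would then be passed through filter(...) before being handed to the gesture-recognition pipeline.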
Added lines 5-6:
----
Deleted line 11:
Added lines 31-32:
----
Added lines 41-42:

----
Added line 2:
* [[#LeapQuestion | Can I use the GRT with a Leap?]]
Added lines 29-37:

[[#LeapQuestion]]
!!!'''Can I use the GRT with a Leap?'''

Yes.

The GRT gesture-recognition modules can accept any type of sensor input, as long as you can get your sensor data into a c++ double vector format. You can therefore get the GRT to work with the Leap by taking the 3D position and rotation values that the Leap SDK gives you and concatenating these values into one c++ vector, which you can then use as input to the GRT.

Note that, because of occlusion of key joints by the hand and other fingers etc., you should choose carefully what values you use as input to the GRT. For example, if you wanted to use the GRT to recognize if the user was making a ''right hand swipe gesture'', then it probably makes sense only to use the 3D centroid value for the right hand as input and ignore any finger values from the right hand and also all values from the left hand. One of the main disadvantages of the data that comes from the Leap SDK (at least in the current early versions of the SDK) is that the data often requires extensive filtering from sophisticated tracking algorithms (such as Kalman filters or particle filters) to mitigate occlusions and erroneous sensor estimates.
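Following the same pattern as the Kinect example above, a minimal sketch for the Leap might look like this. The @@leapRightPalmPosition@@ and @@leapRightPalmRotation@@ arrays are hypothetical placeholders for whatever values you pull out of the Leap SDK yourself; they are not part of the GRT.

(:source lang=C++ wrap=OUTPUT_WIDTH -getcode :) [@
//Create a new double vector to hold the right hand palm position and rotation data from the Leap
vector< double > inputVector(6);

//Add the Leap data to the inputVector
//NOTE: leapRightPalmPosition and leapRightPalmRotation are hypothetical arrays that you would
//fill yourself using the Leap SDK, they are not part of the GRT
inputVector[0] = leapRightPalmPosition[0];
inputVector[1] = leapRightPalmPosition[1];
inputVector[2] = leapRightPalmPosition[2];
inputVector[3] = leapRightPalmRotation[0];
inputVector[4] = leapRightPalmRotation[1];
inputVector[5] = leapRightPalmRotation[2];

//You can now input this data into a GestureRecognitionPipeline (assuming the pipeline has been trained)
pipeline.predict( inputVector );
@]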
Changed lines 2-3 from:
to:
* [[#CustomFeatureQuestion | I've developed my own custom feature extraction algorithm and want to use it with the GRT classification algorithms, is this possible?]]
Changed lines 7-9 from:
Yes, the GRT gesture-recognition modules can accept any of sensor input, as long as you can get your sensor data into a c++ double vector format. You can therefore get the GRT to work with the skeleton data from a Kinect sensor by taking the x, y, and z points from each skeleton joint of a tracked user (or just a few key joints such as the head and hands) and adding the 3D coordinates from each joint to a c++ vector. For example, if you wanted to use the GRT to recognize left hand and right hand gestures then you would do the following somewhere in your code:
to:
Yes. The GRT gesture-recognition modules can accept any type of sensor input, as long as you can get your sensor data into a c++ double vector format. You can therefore get the GRT to work with the skeleton data from a Kinect sensor by taking the x, y, and z points from each skeleton joint of a tracked user (or just a few key joints such as the head and hands) and adding the 3D coordinates from each joint to a c++ vector. For example, if you wanted to use the GRT to recognize left hand and right hand gestures then you would do the following somewhere in your code:
Added lines 28-32:

[[#CustomFeatureQuestion]]
!!!'''I've developed my own custom feature extraction algorithm and want to use it with the GRT classification algorithms, is this possible?'''

Yes. One of the main advantages of the GRT design is that you can write your own custom feature-extraction algorithms and easily add them to the GRT framework. The GestureRecognitionPipeline can then call your custom feature-extraction algorithm just as it would call any of the core GRT modules. To do this, all you need to do is write a small wrapper class that inherits from the GRT::FeatureExtraction base class. You can then add your custom feature-extraction code to this wrapper class and ''ta-da'', the GRT can now call your algorithm. It is also possible to add your own pre-processing, post-processing, classifier, and regression modules simply by writing a wrapper class for your code that inherits from one of the main GRT base classes.
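As a very rough sketch of what such a wrapper might look like: note that the exact virtual methods and protected members required by the FeatureExtraction base class vary between GRT versions, so treat the names below (computeFeatures, numInputDimensions, numOutputDimensions, featureVector, initialized) as assumptions and check the FeatureExtraction header of your version.

(:source lang=C++ wrap=OUTPUT_WIDTH -getcode :) [@
//NOTE: a sketch only - the computeFeatures(...) signature and the numInputDimensions,
//numOutputDimensions, featureVector and initialized members are assumptions about the
//FeatureExtraction base class, check the header of your GRT version for the exact interface
class MyCustomFeatureExtraction : public FeatureExtraction{
public:
    MyCustomFeatureExtraction( UINT numInputDimensions = 3 ){
        this->numInputDimensions = numInputDimensions;
        this->numOutputDimensions = 1;
        featureVector.resize( this->numOutputDimensions );
        initialized = true;
    }

    //The GestureRecognitionPipeline calls this for every new input vector
    virtual bool computeFeatures( const vector< double > &inputVector ){
        //Your custom feature-extraction code goes here - as a trivial example this just
        //sums the input values into a single feature
        double sum = 0;
        for( UINT i=0; i<inputVector.size(); i++ ) sum += inputVector[i];
        featureVector[0] = sum;
        return true;
    }
};
@]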
Deleted line 9:
Changed line 4 from:
!!'''Can I use the GRT with a Kinect?'''
to:
!!!'''Can I use the GRT with a Kinect?'''
Changed line 4 from:
'''Can I use the GRT with a Kinect?'''
to:
!!'''Can I use the GRT with a Kinect?'''
Added line 5:
Changed lines 5-7 from:
Yes, the GRT gesture-recognition modules can accept any of sensor input, as long as you can get your sensor data into a c++ double vector format. You can therefore get the GRT to work with the skeleton data from a Kinect sensor by taking the x, y, and z points from each skeleton joint of a tracked user (or just a few key joints such as the head and hands) and adding the 3D coordinates from each joint to a c++ vector. For example, if you wanted to use the GRT to recognize left hand and right hand gestures then you would do the following:
to:
Yes, the GRT gesture-recognition modules can accept any type of sensor input, as long as you can get your sensor data into a c++ double vector format. You can therefore get the GRT to work with the skeleton data from a Kinect sensor by taking the x, y, and z points from each skeleton joint of a tracked user (or just a few key joints such as the head and hands) and adding the 3D coordinates from each joint to a c++ vector. For example, if you wanted to use the GRT to recognize left hand and right hand gestures then you would do the following somewhere in your code:


(:source lang=C++ wrap=OUTPUT_WIDTH -getcode :) [@

//Create a new double vector to hold the left and right hand skeleton data
vector< double > inputVector(6);

//Add the joint data from the Kinect to the inputVector
inputVector[0] = kinectLeftHandData[0];
inputVector[1] = kinectLeftHandData[1];
inputVector[2] = kinectLeftHandData[2];
inputVector[3] = kinectRightHandData[0];
inputVector[4] = kinectRightHandData[1];
inputVector[5] = kinectRightHandData[2];

//You can now input this data into a GestureRecognitionPipeline and use this to recognize your gestures (assuming you have trained the pipeline already)
pipeline.predict( inputVector );

@]

The example code above assumes that you have two arrays containing the Kinect joint data (these arrays have nothing to do with the GRT; this is something you need to write based on whatever API you are using to get the joint data from the Kinect).
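After calling predict you will normally want to read the result back from the pipeline. A short sketch, assuming the pipeline has been trained and that your GRT version exposes getPredictedClassLabel() and getMaximumLikelihood() (as recent versions do):

(:source lang=C++ wrap=OUTPUT_WIDTH -getcode :) [@
//Run the prediction on the latest input vector
if( pipeline.predict( inputVector ) ){
    //Get the predicted gesture label (the label 0 is normally reserved for the null/rejection class)
    UINT predictedClassLabel = pipeline.getPredictedClassLabel();

    //Get how confident the pipeline is in that prediction (a value between 0 and 1)
    double likelihood = pipeline.getMaximumLikelihood();
}
@]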
Added lines 1-7:
* [[#KinectQuestion | Can I use the GRT with a Kinect?]]

[[#KinectQuestion]]
'''Can I use the GRT with a Kinect?'''
Yes, the GRT gesture-recognition modules can accept any type of sensor input, as long as you can get your sensor data into a c++ double vector format. You can therefore get the GRT to work with the skeleton data from a Kinect sensor by taking the x, y, and z points from each skeleton joint of a tracked user (or just a few key joints such as the head and hands) and adding the 3D coordinates from each joint to a c++ vector. For example, if you wanted to use the GRT to recognize left hand and right hand gestures then you would do the following: