VISION-BASED STATIC HAND GESTURE RECOGNITION USING A KOHONEN SELF-ORGANIZING MAP (SOM) NEURAL NETWORK IN NVIDIA CUDA


GPU Programming (CS525) - Fall 08
by RAJAT MAHAJAN
Instructor: Prof. Andrew Johnson

INTRODUCTION

Vision-based automatic hand gesture recognition has been a very active research topic in recent years, with motivating applications such as human-computer interaction (HCI), robot control, and sign language interpretation. The general problem is quite challenging due to a number of issues, including the complicated nature of static and dynamic hand gestures, complex backgrounds, and occlusions. Attacking the problem in its generality requires elaborate algorithms and intensive computational resources.
In this project I implement neural-network-based static hand gesture (hand posture) recognition for robot control in CUDA. The motivation comes from [2], which deals with a robot navigation problem in which a robot is controlled by hand pose signs given by a human. Because of the real-time operational requirement, the algorithm must be computationally efficient.
I have already implemented the approach described in that paper in MATLAB. It is fast but not very accurate, so implementing gesture recognition with a neural network should improve the accuracy. Neural networks, however, require a considerable number of vector and matrix operations, especially when images are involved, which translates into considerable processing time. Implementing them in a parallel programming model such as CUDA [4] provides the necessary gain in processing speed. A detailed description of neural networks and their implementation on a GPU can be found in [2]. The result is a gesture recognition approach that is both fast and accurate.
