Update #4: Dang those algorithms!

10:06 PM in General by Inez Ang

 

Cracking the gestures
Probably one of the trickiest coding hurdles for us was figuring out how to tell the different gestures apart. We first operated on a grid model that tracked how long the user took to pass through the different areas: if more time was spent in one area, it would be a STILL gesture, while crossing several areas in a short amount of time would be a LONG SWIPE. But this code proved to be very cumbersome and overly tedious to implement.

After a good night’s sleep, we realised a more fluid and accurate model would be to track the distance between co-ordinates over time. This way we could dynamically set boundaries and get point-accurate co-ordinates for the elements to react to. An array was created to store blob centroids every even frame (for better performance), and time could be calculated by looking at the co-ordinates at specific array positions, e.g. to find out how far the hand has moved over a second (at 30fps), the co-ordinates at positions 29 and 14 would be compared.
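Roughly, the check boils down to something like the sketch below (written in Python here for readability; the buffer size, thresholds and names are illustrative placeholders, not our exact values):

```python
import math
from collections import deque

FPS = 30
HISTORY = 30           # ~2 seconds of samples, since we only keep every even frame
STILL_THRESHOLD = 15   # pixels travelled per second -> STILL (placeholder value)
SWIPE_THRESHOLD = 200  # pixels travelled per second -> LONG SWIPE (placeholder value)

centroids = deque(maxlen=HISTORY)  # (x, y) centroid of the tracked blob

def record_centroid(frame_count, x, y):
    """Store the blob centroid on every even frame, as described above."""
    if frame_count % 2 == 0:
        centroids.append((x, y))

def classify_gesture():
    """Compare co-ordinates one second apart (positions 29 and 14 at 30fps)."""
    if len(centroids) < HISTORY:
        return None
    x1, y1 = centroids[14]
    x2, y2 = centroids[29]
    dist = math.hypot(x2 - x1, y2 - y1)
    if dist < STILL_THRESHOLD:
        return "STILL"
    if dist > SWIPE_THRESHOLD:
        return "LONG SWIPE"
    return None
```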

Compensation for those blobs
During development, we decided that we would only focus on crafting the interface for a single user. After scrutinising the OpenCV API, we discovered that the blobs returned in the blob array are ordered from largest to smallest. To filter out all the other extraneous hands, we would track only the centroid of the first (biggest) blob.
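Picking out that biggest blob looks something like this sketch (again in Python; since not every OpenCV version guarantees the ordering, this version sorts by contour area explicitly to get the same effect):

```python
import cv2

def largest_blob_centroid(binary_image):
    """Return the centroid (x, y) of the biggest blob, or None if nothing is found."""
    # OpenCV 4.x signature: findContours returns (contours, hierarchy)
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)  # extraneous hands show up as smaller blobs
    m = cv2.moments(biggest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```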

The results of our field test showed a problem with our hand-tracking algorithm. If only the hand is in the Kinect’s FOV, the center point of the hand would be accurate. However, if the user stretches across the tank and the arm is in the FOV as well, the center point would end up in the middle of the arm. This offset threw the responsiveness of the system out the window.

Fixing this problem was a laborious task, generating data logs and finding patterns amongst all the numbers. Fortunately, there was a consistent correlation between the size of the blob area and the amount of x/y offset, and we were able to fine-tune our hand tracking.
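The correction itself ends up being a simple nudge proportional to the blob area, along these lines (the baseline area, gains and offset direction here are placeholders; our real numbers came out of the data logs):

```python
def compensate_centroid(cx, cy, blob_area,
                        base_area=4000.0, x_gain=0.0, y_gain=0.02):
    """Shift the tracked point from the middle of the arm back towards the hand.
    base_area, x_gain and y_gain are placeholder values, not our tuned ones."""
    excess = max(0.0, blob_area - base_area)  # extra area means more arm in view
    return (cx + x_gain * excess, cy - y_gain * excess)
```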

Click for video