Posted by Tharika Wanninayaka on Monday, March 16, 2020
First, we capture the sign gesture using a web camera and apply a
Gaussian blur filter to the captured image to reduce noise. We then convert the image from RGB to
gray-scale and segment the hand from the background using Otsu thresholding.
Next, a trained cascade classifier's detectMultiScale function returns four values for each detected hand
region: the x-coordinate, the y-coordinate, the width (w), and the height (h). Based on these four
values we draw a rectangle around the hand, which gives the final pre-processed image. The images below show some pre-processed gestures.