Gestures are an important form of non-verbal communication between humans and can also be used to create interfaces between humans and machines. There are several types of gestures including emblems, sign languages, gesticulation and pantomimes. Gesture recognition allows humans to interact with computers through motions of the body, especially hand movements. Some methods of gesture recognition include device-based techniques using sensors on gloves, vision-based techniques using cameras, and controller-based techniques using motion controllers. Gesture recognition has applications in areas such as virtual controllers, sign language translation, game interaction and robotic assistance.
2. Gestures are an important aspect of human interaction, both interpersonally and in the context of man-machine interfaces. A gesture is a form of non-verbal communication in which visible bodily actions communicate particular messages, either in place of speech or together and in parallel with words. Gestures include movements of the hands, face, or other parts of the body.
3. Gesticulation:- Spontaneous movements of the hands and arms that accompany speech.
Language-like gestures:- Gesticulation that is integrated into a spoken utterance, replacing a particular spoken word or phrase.
Pantomimes:- Gestures that depict objects or actions, with or without accompanying speech.
Emblems:- Familiar gestures such as V for victory, thumbs up, and assorted rude gestures.
Sign languages:- Well-defined linguistic systems, such as American Sign Language.
4. What is Gesture Recognition?
Interfacing with computers using gestures of the human body, typically hand movements. Gesture recognition is an important skill for robots that work closely with humans, and it is especially valuable in applications involving human/robot interaction for several reasons.
6. Hand gesture recognition is one obvious way to create a useful, highly adaptive interface between machines and their users. Hand gesture recognition technology would allow for the operation of complex machines using only a series of finger and hand movements, eliminating the need for physical contact between operator and machine.
7. Facial gesture recognition is another way of creating an effective non-contact interface between users and their machines. The goal of facial gesture recognition is for machines to effectively understand emotions and other communication cues from humans, regardless of the countless physical differences between individuals.
8. Sign language recognition is one of the most promising sub-fields in gesture recognition research. Effective sign language recognition would grant the deaf and hard-of-hearing expanded tools for communicating with both other people and machines.
10. Device-based techniques use a glove, stylus, or other position tracker whose movements send signals that the system uses to identify the gesture. The glove is equipped with a variety of sensors that provide information about hand position, orientation, and flex of the fingers.
11. There are two approaches to vision-based gesture recognition:
Model-based techniques:- These try to create a three-dimensional model of the user's hand and use this model for recognition.
Image-based methods:- These detect a gesture by capturing pictures of a user's motions during the course of a gesture.
12. Wired gloves:-
These can provide input to the computer about the position and rotation of the hands using magnetic or inertial tracking devices. Some gloves use fiber-optic cables running down the back of the hand: light pulses are sent through the fibers, and when the fingers are bent, light leaks through small cracks; the registered loss gives an approximation of the hand pose.
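As a rough illustration of how a glove's flex sensors could be turned into a hand pose, the sketch below linearly maps raw sensor readings to bend angles. The sensor names and calibration values are hypothetical, not taken from any real glove SDK.

```python
# Hypothetical sketch: converting raw flex-sensor readings from a wired
# glove into approximate finger bend angles. Calibration constants
# (raw_straight, raw_bent) are made-up illustrative values.

def flex_to_angle(raw, raw_straight=200, raw_bent=800, max_angle=90.0):
    """Linearly interpolate a raw sensor value to a bend angle in degrees."""
    t = (raw - raw_straight) / (raw_bent - raw_straight)
    t = min(max(t, 0.0), 1.0)  # clamp to the calibrated range
    return t * max_angle

# Illustrative per-finger readings -> approximate pose
readings = {"thumb": 250, "index": 780, "middle": 500}
angles = {finger: flex_to_angle(v) for finger, v in readings.items()}
```

A real driver would also calibrate per user, since "straight" and "fully bent" raw values vary between hands.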
13. Stereo cameras:-
A stereo camera has two lenses about the same distance apart as human eyes and takes two pictures at the same time. This simulates the way we actually see, and therefore creates the 3D effect when viewed.
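The reason two lenses give depth can be shown with the classic stereo triangulation formula: a point's horizontal shift between the two images (its disparity) is inversely proportional to its distance. The focal length and baseline below are made-up example values.

```python
# Minimal sketch of pinhole stereo triangulation: Z = f * B / d,
# where f is focal length in pixels, B the distance between the two
# lenses (baseline), and d the disparity in pixels. The default
# values are illustrative, not from a real camera.

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.06):
    """Return depth in metres for a given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Note the inverse relationship: nearby objects (like a hand in front of the camera) produce large disparities, which is why stereo works well at short range.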
14. Depth-aware cameras:-
Using specialized cameras such as structured-light or time-of-flight cameras, one can generate a depth map of what is being seen through the camera at short range, and use this data to approximate a 3D representation of the scene. These can be effective for detection of hand gestures due to their short-range capabilities.
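One reason depth maps make hand detection easy is that the hand can be segmented simply by its distance band. The toy sketch below keeps only pixels within a short range; the tiny "depth map" and thresholds are fabricated for illustration.

```python
# Illustrative sketch: segmenting a hand from a depth map by keeping
# only pixels whose depth falls within a short-range band. The depth
# values and band limits are made-up example data.

def segment_hand(depth_map, near=0.2, far=0.6):
    """Return a binary mask: 1 where depth (metres) lies in [near, far]."""
    return [[1 if near <= d <= far else 0 for d in row] for row in depth_map]

depth = [[1.5, 0.4, 0.45],
         [1.6, 0.5, 1.7]]
mask = segment_hand(depth)  # [[0, 1, 1], [0, 1, 0]]
```

A real pipeline would follow this with noise filtering and connected-component analysis, but the range threshold is what the depth sensor buys you.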
15. Controller-based gestures:-
These controllers act as an extension of the body, so that when gestures are performed, some of their motion can be conveniently captured by software. Mouse gestures are one such example, where the motion of the mouse is correlated to a symbol being drawn by a person's hand.
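A common way mouse-gesture systems correlate pointer motion to a symbol is to reduce the track to a string of coarse stroke directions and match that string against stored gestures. The sketch below shows the idea; the jitter threshold and direction encoding are arbitrary choices, not a standard.

```python
# Hedged sketch of mouse-gesture recognition: reduce a pointer track to
# a sequence of coarse directions (L/R/U/D). The min_dist jitter
# threshold is an arbitrary illustrative value.

def stroke_directions(points, min_dist=10):
    """Encode a list of (x, y) points as a collapsed direction string."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) < min_dist and abs(dy) < min_dist:
            continue  # ignore tiny jitters
        if abs(dx) >= abs(dy):
            d = "R" if dx > 0 else "L"
        else:
            d = "D" if dy > 0 else "U"
        if not dirs or dirs[-1] != d:  # collapse repeated directions
            dirs.append(d)
    return "".join(dirs)

# An "L-shaped" stroke: move down, then right
stroke_directions([(0, 0), (0, 40), (0, 80), (40, 80), (80, 80)])  # "DR"
```

The resulting string ("DR", "UL", …) can then be looked up in a table mapping gesture symbols to commands.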
16. Single camera:-
A normal camera can be used for gesture recognition where the resources or environment would not be convenient for other forms of image-based recognition. It was earlier thought that a single camera might not be as effective as stereo or depth-aware cameras, but a start-up based in Palo Alto named Flutter has challenged this view: it released an app that can be downloaded to any Windows/Mac computer with a built-in webcam.
18. 3D model-based algorithms:- A real hand (left) is interpreted as a collection of vertices and lines in the 3D mesh version (right), and the software uses their relative positions and interactions to infer the gesture.
Skeletal-based algorithms:- The skeletal version (right) effectively models the hand (left). This has fewer parameters than the volumetric version and is easier to compute, making it suitable for real-time gesture analysis systems.
Appearance-based models:- These binary silhouette (left) or contour (right) images represent typical input for appearance-based algorithms. They are compared with different hand templates, and if they match, the corresponding gesture is inferred.
19. Socially assistive robotics:- By using proper sensors worn on the body of a patient and reading the values from those sensors, robots can assist in patient rehabilitation. The best example is stroke rehabilitation.
Sign language recognition:- Just as speech recognition can transcribe speech to text, certain types of gesture recognition software can transcribe the symbols represented through sign language into text.
20. Virtual controllers:- For systems where the act of finding or acquiring a physical controller could require too much time, gestures can be used as an alternative control mechanism. Controlling secondary devices in a car, or controlling a television set, are examples of such usage.
Remote control:- Through the use of gesture recognition, remote control of various devices with the wave of a hand becomes possible.
21. Control through facial gestures:- Controlling a computer through facial gestures is a useful application of gesture recognition for users who may not physically be able to use a mouse or keyboard. Eye tracking in particular may be of use for controlling cursor motion or focusing on elements of a display.
Immersive game technology:- Gestures can be used to control interactions within video games to make the player's experience more interactive and immersive.
24. 1. Latency:- Image processing can be significantly slow, creating unacceptable latency for video games and other similar applications.
2. Lack of a gesture language:- Different users make gestures differently, causing difficulty in identifying motions.
3. Robustness:- Many gesture recognition systems do not read motions accurately or optimally, due to factors such as insufficient background light and high background noise.
4. Performance:- The image processing involved in gesture recognition is quite resource-intensive, and applications may be difficult to run on resource-constrained devices.