Project Report, Design Project 2 - ICT Intervention for Improvisation of Maternal Healthcare in Assam
Indian Institute of Technology Guwahati

ICT Intervention for Improvisation of Maternal Healthcare in Assam
Design Project II

Minal Jain (10020526)
Mannu Amrit (10020523)

Guide – Prof. Keyur Sorathia
Course Co-ordinator – Asst. Prof. Abinash Kumar Swain

In collaboration with IBM Research & EI Lab, IIT Guwahati
Acronyms Used

NRHM – National Rural Health Mission
ANM – Auxiliary Nurse Midwife
ASHA – Accredited Social Health Activist
PHC – Primary Health Center
MO – Medical Officer
ICT – Information and Communication Technology
CHC – Community Health Center
PW – Pregnant Woman
ANC – Antenatal Care
TLX – Task Load Index
ML – Male Literate
MLL – Male Low Literate
MOL – Male Old Literate
MOLL – Male Old Low Literate
FL – Female Literate
FLL – Female Low Literate
FOL – Female Old Literate
FOLL – Female Old Low Literate
Figures and Images Used

Fig.1. - ANMs at Moriyapati sub-center
Fig.2. - A set of static gestures (Paper 4)
Fig.3. - An analysis of the gestures (Paper 4)
Fig.4. - Head sideways (right) to scroll to the right
Fig.5. - Right leg upwards to select or confirm a choice on right side; horizontal scrolling from right side
Fig.6. - Fly right to select or confirm a choice on right side; horizontal scroll on right
Fig.7. - A screenshot of the system screen
Fig.8. - Observations being made from the usability room
Fig.9. - Distribution of user groups
Fig.10. - User performing the tasks
Fig.11. - User performing the task
Fig.12. - Complete data obtained from the user initially while compiling these results
Fig.13. - Complete data obtained from the user initially
Fig.14. - Cognitive Load and Raw TLX values for all the users have been calculated
Fig.15. - Comparative analysis of the ratings of mental demand, temporal demand, physical demand, performance, effort and frustration for pointing and touching
Fig.16. - Bar graph showing the preferences of users of different categories. It shows that touching is majorly preferred over pointing
Fig.17. - Bar graph showing the weight means of different parameters for both touching and pointing
Fig.18. - Graph showing the distribution of cognitive load with age
Fig.19. - Graph showing the distribution of cognitive load with education
Index

Brief History (Design Project I)
Studying Research Papers on Gesture Based Systems
Critical Appraisal
Methodology for Designing Gestural User Interfaces
Building the gesture vocabulary
Experiment
Conclusion
References
Project History (Design Project I)

According to the Sample Registration System (SRS) 2004-2006, the MMR (Maternal Mortality Ratio) for Assam was 480 per 100,000 live births, the highest in the country, while India's MMR was 254.

The study aimed at investigating the existing problems faced by ASHA members and ANMs, their work environment, their role in safe and healthy motherhood, their relationship with pregnant women (PW) and family members, their technology literacy, and opportunities for Information and Communication Technology (ICT) interventions to empower the maternal health scenario.

Two sub-centres (SC), one Anganwadi centre, one primary health centre (PHC), one civil hospital and one community health centre (CHC) were visited and observed. 12 one-to-one on-field interviews were conducted with ASHA members, ANMs, PW and doctors at the Primary Health Centre.

Fig.1. ANMs at Moriyapati sub-center

Research analysis was done using affinity analysis, and six use cases were prepared based on it.

Publications from Design Project I

Keyur Sorathia, Minal Jain, Mannu Amrit, Denny George, Jagriti Kumar and Amit Ranjan, Research Findings, Analysis and Design Opportunities for Empowerment of Maternal Health in Assam, India. In Workshop on Intelligent User Interfaces for Developing Regions (IUIDR), International Conference on Intelligent User Interfaces, CA, USA (19-22 March).

Keyur Sorathia, Mannu Amrit and Minal Jain, Research Findings, Analysis and ICT Interventions for Empowerment of Maternal Health in Assam, India. In International Conference on Global Research Association for Development and Excellence (International Journal of Research in Engineering & Applied Science, ISSN: 2294-3905).
Studying Research Papers on Gesture Based Systems

The following papers were studied, and presentations discussing them were held every Wednesday:

1. Would you do that? Understanding Social Acceptance of Gestural Interfaces – Calkin S. Montero, Jason Alexander, Mark T. Marshall, Sriram Subramanian

This paper presents the main factors that influence gestures' social acceptance, including culture, time, interaction type and the user's position on the innovation adoption curve.

The authors claim that user performance or manipulation of a device, along with the visible results of that performance or its effects, is a vital element that influences social acceptance. Gestures have been classified into four categories on the manipulation vs. effect plane:

Expressive – Expressive gestures have both manipulations and effects visible, such as slapping the phone to mute the ring tone.

Suspenseful – Suspenseful gestures have their manipulations revealed but the effects hidden, for example drawing an exaggerated "X" mark in the air to turn the silent profile ON.

Secretive – Secretive gestures have both the manipulation and effects hidden, such as tapping on the phone to change its volume while talking.

Magical – Magical gestures have their manipulations hidden but the effects revealed or amplified.

Conclusion – Both secretive gestures and expressive gestures have a greater chance of being socially acceptable, whereas suspenseful gestures are more often seen as socially unacceptable.
2. Wave Like an Egyptian – Accelerometer Based Gesture Recognition for Culture Specific Interactions – Matthias Rehm, Nikolaus Bee, Elisabeth André

This paper uses the Wiimote to uncover the user's cultural background by analyzing patterns of gestural expressivity in a model based on cultural dimensions. With this information at hand, the behavior of an interactive system can be adapted to culture-dependent patterns of interaction. The paper uses embodied conversational agents as the interface metaphor. According to the authors, this approach has great potential to provide:
(i) Information presentation
(ii) Entertainment
(iii) Serious games
3. Evaluating Performance and Acceptance of Older Adults Using Freehand Gestures for TV Menu Control – Jan Bobeth, Susanne Schmehl, Ernst Kruijff, Stephanie Deutsch, Manfred Tscheligi

In this paper, the authors explore alternative TV menu control methods, focusing specifically on older users. They investigated the performance and acceptance of freehand gestures by implementing several techniques and conducting a user study with 24 older adults.

In the user study, four different kinds of freehand gesture interaction for controlling a TV menu were compared, investigating specifically the abilities of older adults. Each interaction type was analysed regarding task completion time, error rate, usability and acceptance. Results showed that directly transferring tracked hand movements to control a cursor on the TV achieved the best performance and was preferred by the users.

4. Free-Hand Gestures for Music Playback: Deriving Gestures with a User-Centred Process – Niels Henze, Andreas Löcken, Susanne Boll

A refined process for deriving gestures from constant user feedback is proposed. Along this process, a set of free-hand gestures for controlling music playback is developed. Two gesture sets containing static and dynamic gestures are derived and analyzed in a comparative evaluation. A participatory design method was used.

Fig.2. A set of static gestures (Paper 4)
Fig.3. An analysis of the gestures (Paper 4)

5. Facial Expression Recognition as a Creative Interface – Roberto Valenti, Alejandro Jaimes, Nicu Sebe

An audio-visual creativity tool was developed that automatically recognizes facial expressions in real time, producing sounds in combination with images.

The facial expression recognition component detects and tracks a face and outputs a feature vector of motions of specific locations on the face. The feature vector is used as input to a Bayesian network which classifies facial expressions into several categories (e.g., angry, disgusted, happy, etc.). The classification results are used along with the feature vector to generate a combination of sounds and images that change in real time depending on the person's facial expressions.
6. Full body motion based game interaction for older adults – Kathrin M. Gerling, Ian J. Livingston, Lennart E. Nacke, Regan L. Mandryk

This paper describes how full-body motion-control games can accommodate a variety of user abilities and have a positive effect on mood and, by extension, on the emotional well-being of older adults.

The paper presents three main studies:
1. Identification of appropriate gestures to support video games for institutionalized older adults, their evaluation, and the design of a video game based on the identified gestures
2. Design of a video game and evaluation of how participants responded to the gestures of the game
3. Provision of a guideline for designing gestural user interfaces for institutionalized older adults

7. Teaching Natural User Interaction Using OpenNI & Microsoft Kinect Sensor

The Kinect offers opportunities for novel approaches to classroom instruction on natural user interaction. The current state of this technology is evaluated and an overview of some of the development frameworks is presented. Examples were presented to show how Kinect-assisted instruction can be used to achieve some learning outcomes in HCI courses. The paper concluded and verified that OpenNI, with accompanying libraries, can be used for these activities in multi-platform learning environments.
8. Kinect in the Kitchen: Testing Depth Camera Interactions in a Practical Home Environment

This research takes the Kinect into real-life kitchens, where touchless gestural control could be a boon for messy hands, but where commands are interspersed with the movements of cooking. A recipe navigator, a timer and a music player were implemented. Users were allowed to change the control scheme at runtime and navigate with other limbs when their hands were full.

9. Wiimote and Kinect: Gestural User Interfaces Add a Natural Third Dimension to HCI

The paper presents two systems specifically designed for 3D gestural interaction with 3D geographical maps. The proposed applications rely on two consumer technologies, both capable of motion tracking: the Nintendo Wii and the Microsoft Kinect devices.

10. Using the Kinect to Encourage Older Adults to Exercise: A Prototype

The study aims to find the factors that play an important role in motivating older adults to maintain a physical exercise routine. The system was tested with 5 users in the age group of 20 to 30, and an overall positive response was obtained.

11. Super Mirror: A Kinect Interface for Ballet Dancers

Super Mirror, a Kinect-based system, combines the functionality of studio mirrors and prescriptive images to provide the user with instructional feedback in real time. The research is focused on questions about user control of the system, system recognition of position data, and user feedback.

12. American Sign Language Recognition with the Kinect

The paper aimed at investigating the potential of the Kinect depth-mapping camera for sign language recognition and verification for educational games for deaf children.
Critical Appraisal

Our research showed that no work had been done in the area of gesture based systems for people in rural areas of developing countries, especially in the field of health. We took this as an exploration combining gesture based interaction with the Spoken Web technology of IBM Research to address the gripping problem of high maternal mortality in rural Assam. Literature research on previously designed gesture based systems helped us understand the methodology followed in different projects and evolve our own methodology. A gesture vocabulary was created for reference. An experiment was conducted with people of both genders, from all age groups, in both low literate and literate categories. A comprehensive analysis of the system produced results. In the end, a methodology for designing gesture based systems was proposed.
Methodology for Designing Gestural User Interfaces

Based on the literature research, a methodology for designing gestural user interfaces was proposed.

The human-based principles should make the gestures:
• Easy to perform and remember
• Intuitive
• Metaphorically and iconically logical towards functionality
• Ergonomic; not physically stressing when used often

In order to achieve these principles it is necessary to take usability theory and biomechanics/ergonomics into account.

Following are the stages of gesture system design as proposed by us for the project on maternal health in rural Assam:

Identify right functions (e.g. stop, play, pause etc.) – 1.0

Identify the right set of functionalities your system will require. Explain each functionality in detail to have a clear understanding of the functions.
e.g. skip: it will be used to skip contents on sub-modules

User testing – 2.0

Find the gestures that represent the functions found in step 1.

• Preparation:
  • Categorize the study into pre-study, during-study and post-study sections
  • Pre-study: prepare the introduction document and all required functions
  • During study: prepare the space, video camera, projector and a scenario video (to be presented to users, e.g. a small video in which the stop function is tested)
  • Post study: remuneration, signature, verification of the function-gesture questions
• Study:
  • 20 PW will be recruited*
  • Users must be introduced to the task. A demo of the task is required*
  • Both voice and gesture should be encouraged.
  • The complete task must be recorded - voice recording of the researcher explaining the tasks, users performing the tasks, and post-performance questions.
  • Explain a scenario to the user and ask them to perform gestures for a specific function.
  • Use video recording and written notes to document the performed gestures.
It is important to design the experiments in a way that users use the gestures in a natural way.

User's social acceptance:
- Did they feel comfortable or uncomfortable, awkward or natural, relaxed or embarrassed?
- This will lead to an overall positive or negative impression of the task or technology.

Spectator's social acceptance:
- User actions are performed in a range of public and private situations, i.e. contexts.
- Does the audience understand what the user is doing?
- Do they think the action is 'weird' or 'normal'?
- The spectator quickly builds a positive or negative impression of the user's actions.

Manipulation vs. effect:
- User performance or manipulation of a device, along with the visible results of that performance or its effects, is a vital element that influences social acceptance.
- It has a stronger impact on the spectator's social acceptance. If an interaction is too loud or obtrusive and there is no real meaning to it from the spectator's view, a negative impression will form.
- Technology must perform well to increase social acceptance from users.

Analysis of user testing – 3.0
• Extract commonly used gestures and note how consistently users use them.
• Understand whether those are static or dynamic gestures.
• For dynamic gestures, capture a video or frames to document them.
Selection of a gesture should take into account:
• Evaluate internal force caused by posture
• Deviation from neutral position
• Outer limits
• Forces from inter-joint relations
• Evaluate frequency and duration of that gesture
Analysis of user testing – 3.1

Classification of gestures:
• Expressive – e.g. slapping the phone to mute the ring tone
• Suspenseful – e.g. drawing an "x" in the air to delete contents
• Secretive – e.g. tapping the phone to change its volume
• Magical – gestures are hidden but the feedback is revealed
• Deictic, propositional etc.; more new forms can be identified and classified
Analysis of user testing – 3.2*

Evaluate possible and potential gestures with a team of doctors. This study will help us identify gestures that can potentially be performed by PW in different trimesters.
Analysis of user testing – 4

Test the gesture vocabulary (a short sketch of how the scores could be tallied follows this list):
• Translate all gestures into the "Assamese" language*
• Guess the function
  • Give users a list of functions.
  • Prepare a set of videos explaining each function through a gesture.
  • Present the gestures and ask the person to guess the functions.
  • Score = errors divided by the number of gestures
• Memory
  • Give them a demo of all gestures & associated functions*
  • Give them a 10-minute break*
  • Present a slideshow of the names of the functions at a swift pace, 6 seconds per function. Users are asked to perform the gesture when a function is presented on the slideshow.
  • Score = number of restarts
• Stress
  • Identify the right sequence of gestures*
  • Identify how many times this sequence has to be performed*
  • The user must perform the sequence X times, where X times the size of the gesture vocabulary equals 200 (e.g. for a vocabulary of 20 gestures, X = 10). Between each gesture, go back to the neutral hand position.
  • Note down other observations during the study, e.g. the user was stressed due to a specific gesture.
  • Use the following score list for each gesture and overall for the sequence:
    • No problem
    • Mildly Tiring/Stressing
    • Tiring/Stressing
    • Very annoying
    • Impossible
  • A Likert scale can be used to evaluate the above parameters*
• Social acceptance
  • A 10 sec. video of each gesture will be showcased to participants.
  • They will be asked two questions: an open question (Q1) and a six-point scale question (Q2).
  • Q1 – What would you think if you saw someone else performing this gesture (for example, when walking down the street)? Participants will be asked to give a 2-3 keyword answer for every gesture and then fill in Q2.
  • Q2 – How would you feel performing this gesture in a public space? A Likert scale ranging from 1 (Embarrassed) to 6 (Comfortable) will be given to them. This scale will give us insights into the social acceptance of the gestures.
  • Social acceptance needs to be understood in more detail.
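To make the scoring of these vocabulary tests concrete, the short sketch below shows one way the guess-the-function, memory and stress-repetition figures described above could be tallied. It is a minimal illustration in Python; the function names and example numbers are assumptions for illustration and are not part of the proposed methodology itself.

```python
# Minimal sketch (hypothetical data) of the scoring described in the list above.

def guessability_score(errors: int, num_gestures: int) -> float:
    """Guess-the-function test: errors divided by the number of gestures."""
    return errors / num_gestures

def memory_score(num_restarts: int) -> int:
    """Memory test: the score is simply the number of restarts."""
    return num_restarts

def stress_repetitions(vocabulary_size: int, total_performances: int = 200) -> int:
    """Stress test: repeat the sequence X times, where X * vocabulary size = 200."""
    return round(total_performances / vocabulary_size)

# Example with assumed numbers: a 20-gesture vocabulary, 3 guessing errors, 2 restarts
print(guessability_score(errors=3, num_gestures=20))   # 0.15
print(memory_score(num_restarts=2))                    # 2
print(stress_repetitions(vocabulary_size=20))          # 10 repetitions of the sequence
```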
Building the gesture vocabulary

The 3D gesture documentation was an attempt to provide an overview of possible 3D gestures which can be implemented in gestural user interfaces for a variety of purposes. The possible functionality of each gesture was also highlighted. It was gathered from existing literature and on-going research on the identification of appropriate gestures for social acceptance.

The gestures are divided into three major sections:
• Upper body gestures – gestures that involve upper body (above waist) movements
• Lower body gestures – gestures that involve lower body (below waist) movements
• Full body gestures – gestures that involve full body movement

Fig.4. Head sideways (right) to scroll to the right
Fig.5. Right leg upwards to select or confirm a choice on right side; horizontal scrolling from right side
Fig.6. Fly right to select or confirm a choice on right side; horizontal scroll on right
Experiment

Ms Sumitha Sharma visited the EI Lab from the Speech Based and Pervasive Interaction Group, Tampere Unit for Computer-Human Interaction, University of Tampere, Finland. A study was conducted with her to analyse the comfort levels of literate males and females below and above the age of 35, and of low literate males and females below and above 35 years of age. Also, a comparative analysis was conducted to understand their preference between pointing at an option and touching the corresponding body part while using a gesture based system.

System

We developed a health information system that used free-form gestures as input and provided audio-visual content in Assamese as output. When the system detects a user, a 3D Assamese lady avatar introduces the system to the user, explaining how to interact using two selection methods: pointing and touching. The user can point to icons on the menu screen or touch the particular body part with their right hand to trigger a selection, as shown in figure 7.

The system contained information about the head, neck, shoulder and stomach: the basic functionality of each part and what ailments it is most prone to. A Microsoft Kinect was used to track the user's upper body movements, and the 3D model and video content were rendered using the Panda 3D graphics engine. The core logic application that controlled the output based on the user's Kinect data was coded in Python. An added feature was a wave gesture to return to the menu screen from the currently playing video content.
Fig.7. A screenshot of the system screen
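As an illustration of how such core logic can map tracked joints to the two selection methods, the sketch below outlines the idea in plain Python. The joint names, screen regions, dwell time and touch radius are assumptions made for illustration; they are not taken from the project's actual implementation, which ran on Kinect data and Panda 3D as described above.

```python
# Illustrative sketch (not the project's actual code) of how the core logic could
# map tracked upper-body joints to the two selection methods.
# Joint positions are assumed to arrive as (x, y) screen-space tuples from the
# Kinect skeleton tracker; names and thresholds here are assumptions.

from math import hypot

# Hypothetical screen regions for the four menu icons: (x, y, radius)
MENU_ICONS = {
    "head": (200, 100, 60),
    "neck": (200, 220, 60),
    "shoulder": (320, 220, 60),
    "stomach": (200, 420, 60),
}

def detect_pointing(hand_xy, hold_frames, required_frames=30):
    """Pointing: the on-screen hand cursor dwells over an icon for ~1 second."""
    x, y = hand_xy
    for part, (cx, cy, r) in MENU_ICONS.items():
        if hypot(x - cx, y - cy) <= r:
            hold_frames[part] = hold_frames.get(part, 0) + 1
            if hold_frames[part] >= required_frames:
                return part          # selection triggered
        else:
            hold_frames[part] = 0
    return None

def detect_touching(hand_xy, body_joints, touch_radius=40):
    """Touching: the right hand comes close to the corresponding tracked body joint."""
    hx, hy = hand_xy
    for part, (jx, jy) in body_joints.items():
        if hypot(hx - jx, hy - jy) <= touch_radius:
            return part              # e.g. hand near the head joint selects "head"
    return None

# Example frame: the right hand near the tracked head joint selects "head"
body = {"head": (250, 90), "neck": (250, 180), "shoulder": (330, 200), "stomach": (250, 400)}
print(detect_touching((255, 95), body))   # -> "head"
```

In practice, the same per-frame loop would also drive the visual feedback (the hand cursor for pointing and the icon colour change for touching) that participants reacted to during the study.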
User Study
A user study was conducted with native
Assamese users in March 2013. Initially,
we asked the users to try either pointing
or touching depending on what they
preferred but it was observed that
participants would only try touching
since it was explained last in the
introductory video. This made it difficult
to ask them what they preferred if they
only tried one method. To be able to
compare the two selection methods, we changed the task to include both pointing and touching for each user. Thus the user study is divided into
two parts: one with 9 users who tried
the system once and the other with 25
users who were explicitly asked to try
both pointing and touching as two
separate tasks. After each task, users
were asked to answer the NASA TLX
rating and weights comparison. At the
end of the study, users were also
interviewed. Each user was given Rs 200
as remuneration.
1. Setup
The setup consisted of a laptop running
the system and connected to a Kinect for
user tracking, speakers for audio and a
51” LCD TV displaying the graphical
output. Participants were asked to step
in between two lines marked on the
floor, 1 meter apart and three meters
away from the TV, as shown in figure 8. A camera was kept inside the room
that recorded the actions of the user and
this was further connected to a usability
room where observations were made by
us.
Fig.8. Observations being made from the
usability room
2. Participants
There were 37 participants in total,
consisting of both male and female low
literate and educated users. Users were
classified based on gender, age and
education level, where users above 35 years of age were considered old and users with more than 10 years of schooling were considered educated. This gave us 8 user groups following the convention: FL, FLL, FOL, FOLL
(female literate, female low literate,
female old literate, female old low
literate) and similarly for the male users
(ML, MLL, MOL, MOLL). Out of these 37
participants, 9 were asked to do only
one task so their data is incomplete for a
direct TLX comparison, and 3 didn’t
answer the NASA TLX completely either
because they were too tired or in a
hurry to leave. Thus the remaining 25
users performed two separate tasks
(one for pointing and one for touching)
where they were asked to select any two
of the four body parts for information
which they did not have to remember.
The user profile and distribution of
those 25 users is shown in the figure
below:
3. Procedure
Of those 25 participants, each
participant was first asked to try an
exercise session where they were shown
a human shadow on TV that imitated
their upper body movements. This was
done for two reasons: first to get a fair
idea of how well the Kinect was able to
track the users and second to allow the
user to familiarize herself / himself with
the on-screen shadow. Users were asked
to just wave their hands in the air or
perform any gesture of their liking for as long as they felt comfortable. Then users
were asked to find information about
any two body parts by first pointing and
then later touching (or vice versa).
There was no time limit for any of the
tasks and users could select more than
two options if they so wished. After each
task, users were asked to fill in the NASA
TLX system evaluation form. Since a lot
of the users were not comfortable with
this type of questionnaire or were not
familiar with the NASA TLX terminology,
moderators translated the six sub-scale ratings and comparisons. After completing both the tasks and their NASA TLX evaluations, users were asked to answer three interview questions:
Fig.9. Distribution of user groups
a. Of the two selection methods,
which one did they prefer and
why?
b. Give one positive feedback and
one negative feedback about
their interaction with the system.
c. If they would be open to using
such a system in the future.
4. Observations
The introduction of the ASHA health worker as a 3D character was found effective among low literate users. Users performed
“namaste” (hello) and “dhanyavad”
(thank you) in front of the system. After
understanding the system, low literate
users started expecting more from the
system. They were found touching their
knee, back and other body parts. One
user started verbally explaining her
back pain to the system, indicating a
strong relationship built with system. As
compared to low literate users, literate
users were found emotionally less
connected to the system. They did not
perform “namaste” or “dhanyavad” to
the system, instead asked about
advanced features such as more
language options and increase/decrease in volume, etc. They also found the introduction video too long. One user
mentioned, “This kind of a system can be
used at home, but not in public space”,
showcasing literate users' lower willingness to use gesture based interfaces in public spaces.
The system's technical performance was found critical to the results. Inaccurate system performance confused users, who continuously looked back towards the moderator for help.
One user got tense as head touching did not work. She asked whether there was any problem with her head which had caused this error. Between touching and
pointing, users whose touching was
found inaccurate preferred pointing as a
gesture modality, while users whose
touching was found accurate preferred
touching. Similarly, visual feedback played an important role in selecting pointing as the preferred gesture modality.
Fig.10. User performing the tasks
One user said that they liked pointing because it showcased their hand on screen while selecting. Touching also had visual feedback (the icon changed its colour when touching a specific body part); however, users could not relate to the visual changes. A few
low literate users mistook the content /
system as being able to take their x-rays
or see inside of them. It showed that
users are easily misled into believing the
system is more capable than it is and
trust it blindly.
A lot of the users would keep touching
the relevant body part even after the
video started playing. It shows the need
to find a way to define the exact gesture
a system takes as input. Currently, users
felt that anything they did would ‘do
something’ to the system.
Traditionally, female users in India wear a sari or salwar-kamiz for their daily routine. Low literate users (old and young) and old literate users performed all the tasks wearing traditional clothes. While performing the tasks, one user's pallu slipped from her salwar-kamiz, which was detected as input by the system and confused the user. Low literate users related the system to touch based interfaces. Two users tried reaching the television to touch the preferred content. One user mentioned, "I have seen this system in a local museum", which actually was a touch enabled interface. It is important for a system to inform users that it is not a touch based system.
The NASA TLX method, parameters and ratings are provided in English. Administering the post-study questionnaire in the local language using the NASA TLX method was found very difficult. Researchers had difficulty explaining the various "demands" in the local languages (Assamese/Hindi), due to which it was difficult for users to compare between demands, especially for the low literate users.
For touching, users were not able to relate to the human shadow on the screen (even after the exercise session), and thus it seems that users felt there was immediate feedback only for pointing (the hand cursor on the screen). This is interesting because during the exercise session they were able to relate to the shadow. Also, with users so new to such a gesture based system, it seems that they found this mapping difficult to recall.
Fig.11. User performing the task
The following tables and graphs show the analysis of the study. Fig. 12 shows the table which has all the data collected from the users, organised in tabular form. Cognitive Load, Raw TLX, mean and standard deviation have been calculated additionally.

Fig.12. Complete data obtained from the user initially while compiling these results.
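For reference, the sketch below shows how the Raw TLX, the weighted TLX (the "cognitive load" values reported in the next table) and the t-test used in the following analysis could be computed from the collected ratings. It is a minimal, assumed example in Python; the helper names and numbers are illustrative and are not taken from the actual analysis.

```python
# Minimal sketch (assumed helper, not the project's analysis script) of how the
# Raw TLX, weighted TLX (cognitive load) and the paired t-test between the
# pointing and touching tasks could be computed from the collected ratings.
from statistics import mean
from scipy import stats

SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings):
    """Raw TLX: unweighted mean of the six sub-scale ratings (0-100)."""
    return mean(ratings[s] for s in SUBSCALES)

def weighted_tlx(ratings, weights):
    """Weighted TLX ('cognitive load'): each rating is weighted by the number of
    times its sub-scale was chosen in the 15 pairwise comparisons, divided by 15."""
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

# Hypothetical example for one user and one task
ratings = {"mental": 55, "physical": 30, "temporal": 40,
           "performance": 20, "effort": 50, "frustration": 25}
weights = {"mental": 4, "physical": 2, "temporal": 3,
           "performance": 1, "effort": 3, "frustration": 2}   # weights sum to 15
print(raw_tlx(ratings), weighted_tlx(ratings, weights))

# Paired t-test across users (hypothetical per-user scores for the two tasks)
pointing = [42.0, 55.3, 38.7, 61.0, 47.3]
touching = [40.7, 58.0, 35.3, 59.3, 50.0]
t, p = stats.ttest_rel(pointing, touching)
print(p)   # a p-value above 0.05 indicates no significant difference, as in the study
```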
In the following table, the cognitive load and Raw TLX for each user's touching and pointing have been calculated.

Fig.14. Cognitive Load and Raw TLX values for all the users have been calculated.
The table above shows the mean and standard deviation for all the parameters. It is observed that the values of the mean and standard deviation are quite close. Using a t-test, the p-value for the comparison between pointing and touching for all the users is 0.27194. Hence, no statistically significant difference is obtained here. The following graphs give a few insights.

Fig.15. Comparative analysis of the ratings of mental demand, temporal demand, physical demand, performance, effort and frustration for pointing and touching.
Fig.16. Bar graph showing the preferences of users of different categories. It shows that touching is majorly preferred over pointing.

Fig.17. Bar graph showing the weight means of different parameters for both touching and pointing.
The following were a few user statements:
• Pointing could be controlled better. (ML)
• Preferred touching the body part to pointing as it required less physical movement. (ML)
• Preferred touching the body part as pointing wasn't natural. (MOL)

In the above tables, we observe that the p-value obtained by the t-test is greater than 0.05, which shows that the difference between the two conditions is not significant. However, we feel such a result is due to factors like system errors and inaccuracies, and the difficulty faced while conducting the NASA TLX due to language problems as well as improper translation of the meanings of the terms.
Conclusion

Based on the study conducted above, the proposed methodology was refined. It will now be used in the subsequent months for designing the gesture based system. In the experiment, although the quantitative results show no clear distinction between pointing and touching, the qualitative analysis throws light upon factors that might have influenced the study, such as system errors and inaccuracies, and the difficulty faced while conducting the NASA TLX due to language problems as well as improper translation of the meanings of the terms. A research paper is being written which will be submitted to the 17th ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW 2014).
References

1. Would you do that? Understanding Social Acceptance of Gestural Interfaces. Calkin S. Montero, Jason Alexander, Mark T. Marshall, Sriram Subramanian
2. Wave Like an Egyptian – Accelerometer Based Gesture Recognition for Culture Specific Interactions. Matthias Rehm, Nikolaus Bee, Elisabeth André
3. Evaluating Performance and Acceptance of Older Adults Using Freehand Gestures for TV Menu Control. Jan Bobeth, Susanne Schmehl, Ernst Kruijff, Stephanie Deutsch, Manfred Tscheligi
4. Free-Hand Gestures for Music Playback: Deriving Gestures with a User-Centred Process. Niels Henze, Andreas Löcken, Susanne Boll
5. Facial Expression Recognition as a Creative Interface. Roberto Valenti, Alejandro Jaimes, Nicu Sebe
6. Full Body Motion Based Game Interaction for Older Adults. Kathrin M. Gerling, Ian J. Livingston, Lennart E. Nacke, Regan L. Mandryk
7. Teaching Natural User Interaction Using OpenNI & Microsoft Kinect Sensor. Norman Villaroman, Dale Rowe, Bret Swan
8. Kinect in the Kitchen: Testing Depth Camera Interactions in Practical Home Environment. Galen Panger
9. Wiimote and Kinect: Gestural User Interfaces Add a Natural Third Dimension to HCI. Rita Francese, Ignazio Passero, Genoveffa Tortora
10. Using the Kinect to Encourage Older Adults to Exercise: A Prototype. Samyukta Ganesan, Lisa Anthony
11. Super Mirror: A Kinect Interface for Ballet Dancers. Zoe Marquardt, João Beira, Isabel Paiva, Natalia Em, Sebastian Kox
12. American Sign Language Recognition with the Kinect. Zahoor Zafrulla, Helene