MAGIC: A Motion Gesture Design Tool
Daniel Ashbrook, Georgia Tech and Nokia Research Center Hollywood
Thad Starner, Georgia Tech
http://research.nokia.com/files/2010-Ashbrook-CHI10-MAGIC.pdf
Presented at the 28th Annual ACM SIGCHI Conference on Human Factors in Computing Systems (CHI)
Abstract:
Devices capable of gestural interaction through motion sensing are increasingly becoming available to consumers; however, motion gesture control has yet to appear outside of game consoles. Interaction designers are frequently not expert in pattern recognition, which may be one reason for this lack of availability. Another issue is how to effectively test gestures to ensure that they are not unintentionally activated by a user’s normal movements during everyday usage. We present MAGIC, a gesture design tool that addresses both of these issues, and detail the results of an evaluation.
52. Gestures to define:
• Play/Pause
• Shuffle
• Next Track
• Previous Track
• Volume Up 10%
• Volume Down 10%
• Next Playlist
• Previous Playlist
53. Gesture criteria:

quantitative (see the measurement sketch below):
• The gesture must reliably activate the desired function.
• Performing the gesture must not activate other functions.
• The functionality associated with a gesture must not be activated by a user’s everyday movements.

qualitative:
• The gesture should be easy to remember.
• The gesture should be easy to perform.
• The gesture should be socially acceptable.
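As a rough illustration (not taken from the slides or from MAGIC itself), the three quantitative criteria map naturally onto recall, precision, and false activations per hour against the Everyday Gesture Library (EGL). A minimal Python sketch with hypothetical names and made-up data:

```python
# Hedged sketch (not MAGIC's actual API) of turning the three quantitative
# criteria into numbers, given a recognizer's output on labeled test
# examples plus its activation count on an EGL recording of known length.

def quantitative_criteria(test_labels, test_predictions, target,
                          egl_activations, egl_hours):
    """Score one gesture class against the three quantitative criteria."""
    # Criterion 1: the gesture reliably activates its function (recall).
    target_total = sum(1 for t in test_labels if t == target)
    hits = sum(1 for t, p in zip(test_labels, test_predictions)
               if t == target and p == target)
    recall = hits / target_total if target_total else 0.0

    # Criterion 2: performing the gesture must not trigger other functions
    # (precision of the recognizer's activations for this class).
    predicted = sum(1 for p in test_predictions if p == target)
    precision = hits / predicted if predicted else 0.0

    # Criterion 3: everyday movement must not trigger the function
    # (false activations per hour against the EGL).
    egl_rate = egl_activations / egl_hours if egl_hours else float("inf")

    return {"recall": recall,
            "precision": precision,
            "egl_false_activations_per_hour": egl_rate}


# Toy usage with made-up labels and predictions:
labels = ["next", "next", "play", "next", "play"]
preds = ["next", "play", "play", "next", "next"]
print(quantitative_criteria(labels, preds, "next",
                            egl_activations=3, egl_hours=58.5))
```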
54. participants

Condition | Num participants | Age  | Num female | UCD experience (1-9) | UI design experience (1-9) | Pattern rec experience (1-9)
no EGL    | 7                | 29.0 | 2          | 5.9                  | 6.7                        | 4.0
EGL       | 7                | 31.6 | 2          | 6.6                  | 5.6                        | 3.0

recruited participants with UI and/or UCD experience
55. collected EGLs

EGL   | age         | gender | occupation      | hours collected
1     | 32          | M      | PhD CS          | 19:18
2     | 26          | M      | MS CS           | 9:33
3     | 27          | F      | Librarian       | 7:40
4     | 28          | M      | Civil Engineer  | 6:53
5     | 30          | F      | IT professional | 4:50
6     | 37          | M      | IT professional | 4:08
7     | 28          | F      | Writer          | 3:57
8     | 29          | F      | Project manager | 2:09
Total | 29.6 (mean) | 4M/4F  |                 | 58:28
56. collected EGLs

(same eight EGLs as above, annotated with the activities recorded)

Activities included:
• attending a conference
• brewing beer
• knitting
• vacationing at the beach
• vacationing in the mountains
• working at a computer
• hiking
• cooking
• home repair
• attending work meetings
• making cheese
66-67. user performance

about 2 hours to complete the task

mostly linear progression: create, test, EGL
mostly linear progression within each step: add class, make examples, troubleshoot
69. feedback
general

“This is really hard!” (but…) “I really like the software and I really like the interface”
“Gesture creation was easy”
“It’s pretty awesome… it’s really fun”
“I liked the task” … “I think it’s really interesting”
70. feedback
egl

“I just kinda feared the EGL”
(regarding video) “I didn't care why I was hitting the [EGL]; I can't change what's in there”
“It was really useful to compare both hat-mounted videos” (EGL video and gesture video)
The EGL tab is “very important”
71. feedback
visualizations

Graphs were “something a little bit complicated for me”
“It takes a while to build up an intuition about what [the graphs] mean”
“I thought I needed to understand [the graphs] more”
“[The graphs] give a good reflection of what I’ve done”
75. future work
• improve speed, responsiveness
• easier to understand visualizations
• test out other sensors, algorithms
• gesture & example annotations
76. see paper for:
• related work
• users’ gesture strategies
• statistics!
78. ISWC'10
IEEE International Symposium on Wearable Computers (“Connected Smartware”)
Oct. 10th-13th, 2010, COEX, Seoul, South Korea

Topics include: on-body and mobile systems; usability, HCI, and human factors in wearable computing; applications of wearable systems

Submission deadlines:
May 21st, 2010: papers, notes, posters, & design contest
May 30th, 2010: videos, late-breaking results, demonstrations, Ph.D. forum, & tutorials

http://www.iswc.net
79-82. user performance
gesture goodness

goodness = 2 · (precision · recall) / (precision + recall)
(the F-measure, or harmonic mean of precision and recall)

• p = 80%, r = 100% → goodness = 89%
• p = 100%, r = 90% → goodness = 95%
• p = 78%, r = 88% → goodness = 82%
• p = 100%, r = 100% → goodness = 100%
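A quick numeric check of the goodness formula, as a minimal Python sketch (the function name and loop are illustrative, not part of MAGIC):

```python
# Verify the slide's example goodness values from precision and recall.

def goodness(precision, recall):
    """F-measure: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The four example points from the slides:
for p, r in [(0.80, 1.00), (1.00, 0.90), (0.78, 0.88), (1.00, 1.00)]:
    print(f"p={p:.0%}, r={r:.0%} -> goodness={goodness(p, r):.3f}")
# Prints 0.889, 0.947, 0.827, 1.000 -- the slides' 89%, 95%, 82%, 100%.
```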
UI designers don’t have to know about USB protocols for mouse clicks, so gesture designers shouldn’t have to know math. But if they do, we want to support it!
http://www.flickr.com/photos/chromewavesdotorg/528726488/sizes/m/
http://www.flickr.com/photos/ervega/3709443341/sizes/o/
auto start/stop record examples
contextual video - play back for retrospection
goodness = harmonic mean of precision & recall; see paper for more
graphically locate outliers
first: tutorial, then: task
criteria for realistic scenario - given to users
remainder is tested along with other volunteer-collected egls
noEGL is 27 times worse!
feedback: system is slow
be quick!
predominantly HCI audience; will explain some ML
non-expert: yes; most participants weren't experts in pattern recognition. In general, no statistically significant differences by pattern recognition experience (see paper for more): PR experts didn't do better, but that also means non-experts didn't do worse!
expert: somewhat; can pick which gestures to include, can adjust the threshold; 3 participants self-rated as expert and seemed to have an easier time understanding the graphs.
could do more: tweak algorithm parameters, switch out back-end recognition, see confusion matrix
Yes, definitely some. The effort that goes into gesture creation means users don't want to change much after a failure in the EGL, but many did. System speed was the big issue curbing iteration.
Yes! Retrospection was highly popular & heavily used.
EGL lets you test new gestures and new recognizers
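To make that concrete, here is a hedged Python sketch of sweeping a candidate recognizer over EGL data to count false activations per hour; the recognizer interface, window parameters, and synthetic data are hypothetical, not MAGIC's actual back end:

```python
# Sketch: count how often a recognizer would fire on everyday movement
# recorded in an Everyday Gesture Library (EGL).

import random

def egl_false_activations(egl_samples, recognizer, window_len, hop, sample_rate_hz):
    """Slide a window over EGL samples and count spurious activations."""
    activations = 0
    for start in range(0, len(egl_samples) - window_len + 1, hop):
        window = egl_samples[start:start + window_len]
        if recognizer(window):
            activations += 1
    hours = len(egl_samples) / sample_rate_hz / 3600.0
    return activations, (activations / hours if hours else float("inf"))

def fires_on_energy(window):
    # Toy stand-in for a recognizer: fire when mean deviation is high.
    return sum(abs(x - 1.0) for x in window) / len(window) > 0.3

# Synthetic "everyday movement": ~90 minutes of fake accelerometer
# magnitudes at 20 Hz.
random.seed(0)
fake_egl = [random.gauss(1.0, 0.3) for _ in range(90 * 60 * 20)]

count, per_hour = egl_false_activations(fake_egl, fires_on_energy,
                                        window_len=40, hop=20, sample_rate_hz=20)
print(f"{count} false activations ({per_hour:.1f} per hour)")
```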
video can be used later as well for training or social acceptability testing
situation: jogging, sleeping, normal life?
example target users: children vs. adults, or people with different abilities; different abilities need different EGLs (e.g., Parkinson's vs. able-bodied)
other issue: interface designers are experts in interface design, not pattern recognition. Nor should they have to be!