SlideShare ist ein Scribd-Unternehmen logo
1 von 4
Downloaden Sie, um offline zu lesen
Eye and Pointer Coordination in Search and Selection Tasks
           Hans-Joachim Bieg*,                                             Harald Reiterer                                    Heinrich H. Bülthoff
    Lewis L. Chuang, Roland W. Fleming                           Human-Computer Interaction Group                      Max Planck Institute for Biological
     Max Planck Institute for Biological                          University of Konstanz, Germany                     Cybernetics, Tübingen, Germany and
      Cybernetics, Tübingen, Germany                                                                                Dept. of Brain and Cognitive Engineering,
                                                                                                                         Korea University, Seoul, Korea

Abstract                                                                                       Eye and hand coordination has been extensively studied for
                                                                                               reaching and pointing actions with the hands [e.g. Neggers and
Selecting a graphical item by pointing with a computer mouse is                                Bekkering 2000] but not for tasks with a computer input device
a ubiquitous task in many graphical user interfaces. Several                                   in a graphical user interface. A first study was carried out by
techniques have been suggested to facilitate this task, for in-                                Smith et al. [2000] to examine differences in the coordination of
stance, by reducing the required movement distance. Here we                                    eyes and pointer movements when using various computer input
measure the natural coordination of eye and mouse pointer                                      devices. Smith and colleagues found that participants employed
control across several search and selection tasks. We find that                                three different eye and hand coordination strategies when carry-
users automatically minimize the distance to likely targets in an                              ing out simple target acquisition tasks: a) eyes leading the poin-
intelligent, task dependent way. When target location is highly                                ter to the target (target gaze), b) eyes directly tracking the poin-
predictable, top-down knowledge can enable users to initiate                                   ter (following), and c) switching between target and pointer
pointer movements prior to target fixation. These findings ques-                               (switching). Similar patterns were found by Law et al. [2004]
tion the utility of existing assistive pointing techniques and                                 albeit with a specialized laparoscopic tool and not with a com-
suggest that alternative approaches might be more effective.                                   mon computer pointing device.

CR Categories: H.5.2 [Information Interfaces and Presentation]                                 In the present study we investigated the coordination of eyes and
User Interfaces – Input devices and strategies. H.1.2 [Models                                  pointer movements on several selection tasks. In contrast to
and Principles] User/Machine Systems – Human factors                                           previous studies, these tasks required visual search for the target
                                                                                               item before selection. Such a combination of search and selec-
Keywords: Eye movements, Eye-tracking, Eye-hand Coordina-                                      tion is a staple of numerous human-computer interaction tasks
tion, Multimodal Interfaces, Input Devices                                                     [Fleetwood and Byrne 2008]. We wanted to see whether and
                                                                                               how visual search affected the coordination of eye and mouse
                                                                                               pointer movement.
1      Introduction
                                                                                               2     Method
A fundamental task in modern computer interfaces is the selec-
tion of a graphical item with a pointing device (e.g. a mouse).
The difficulty of such a pointing task is well described by Fitts’                             2.1     Participants
Law as the logarithm of the amplitude (distance to the target) of
the movement divided by the target width (the required accura-                                 The experiment was performed on 12 participants with normal
cy) [MacKenzie 1992]. Several techniques have been developed                                   or corrected-to-normal vision. Data from one participant was
to facilitate pointing tasks by decreasing their difficulty along                              removed because of large eye-tracker calibration errors. All
the lines of Fitts’ Law [Balakrishnan 2003]. For instance, tech-                               participants were regular computer and mouse users.
niques exploit eye-tracking technology to move the pointer to
the location of a detected eye fixation on the computer screen                                 2.2     Tasks
[Zhai, et al. 1999; Bieg et al. 2009; Drewes and Schmidt 2009]
or to select one of several pointers near the fixation point [Räihä                            Three tasks were presented (T1-T3) that required searching for
and Špakov 2009; Blanch and Ortega 2009], thereby reducing                                     and selecting (i.e. clicking on) one or several target items or
the amplitude and thus also the difficulty of the pointing move-                               drag-and-drop of the target item (see Figure 1). A fourth task
ment. The underlying assumption is that the item of interest is                                was also presented during the experiment but is not discussed
fixated before it is acquired and that turn-taking occurs between                              here. Task T1 was a simple target acquisition task. Users started
eye and pointer movement to maximize movement time savings.                                    at a location 800 px to the right of the target item and moved a
The purpose of this work is to test directly whether this assump-                              graphical pointer with the mouse to select it and drag it over to
tion holds across a range of tasks.                                                            the circular area. Task completion was confirmed by clicking on
                                                                                               a button labeled “done” next to the circular area. In the second
*e-mail: hans-joachim.bieg@tuebingen.mpg.de                                                    task (T2) participants searched a 13 × 13 grid for a single target,
                                                                                               specified uniquely by shape (triangle, rectangle, circle) and color
                                                                                               (red, green, blue) before performing the same drag-and-drop
                                                                                               task. The position of the grid on the screen was fixed and the
                                                                                               locations of target and distractor items were randomized within
                                                                                               the grid. In task T3, the same amount of items were scattered
                                                                                               across the whole screen. Five target items had to be searched
© 2010 Association for Computing Machinery. ACM acknowledges that this contribution            and clicked on in this task. The size of each item was approx-
was authored by an employee, contractor or affiliate of the German Government. As such,
the Government retains a nonexclusive, royalty-free right to publish or reproduce this         imately 20 px (3.3 cm, 1.92° visual angle). Before each trial, a
article, or to allow others to do so, for Government purposes only.
ETRA 2010, Austin, TX, March 22 – 24, 2010.
© 2010 ACM 978-1-60558-994-7/10/0003 $10.00

                                                                                          89
screen was shown to inform participants of the current target’s            trial took considerably longer (mean: 5.5 s, std. dev.: 0.60, 95%
color and shape. For reasons of consistency, this was also done            CI of difference: 1.5-2.3 s, t(10)=10.5, p<0.01). Fewer fixations
in task T1. This preparatory screen contained an instance of the           were made during the single-target task T1 (mean: 6.0, std. dev.:
target item and an “ok” button which had to be clicked in order            0.78) compared to the search task T2 (mean: 14.4, std. dev.:
to start the trial. For tasks T1 and T2, the button and item in-           2.37, 95% CI of difference: 6.7-10.0, t(10)=11.1, p<0.01). Thus,
stance were located at a constant position to the right of the             it took participants approximately 2 seconds longer to complete
screen. The button was at the exact same position as the “done”            task T2. This time was likely spent on the search for the target
button in the trial display. The item instance was above the               item and eye movements required for item identification.
button. For task T3, item and button were shown in the center of
the screen. With this arrangement, gaze and pointer position               Since almost 40% of trial completion time seems to be spent on
were at an approximate constant starting position for each trial           search, how do users coordinate pointing movements during this
(near the “done” button on the right side for T1 and T2 or center          time? Analysis of pointer movement in task T2 shows that the
of the screen for T3).                                                     pointer was already close to the target when the first fixation on
                                                                           the target item was made. For instance, in more than 80% of
                                                                           trials, the pointer had already moved half of its initial distance to
                                                                           the target (Figure 2 line a). For the simple selection task T1, the
                                                                           pointer’s distance to the target was generally larger. Still, in
                                                                           almost 60% of trials, the pointer had already moved more than
                                                                           half of the initial distance (Figure 2 line b). In task T2, the me-
                                                                           dian latency between the first fixation on the target item and the
                                                                           beginning of the pointer movement was -1.7 s. In T1, the median
                                                                           latency was -190 ms. In more than 60% of trials of T2 the poin-
                                                                           ter movement was initiated almost 1 s before the target had been
                                                                           fixated (Figure 3 line a). In trials of T1, the latency was shorter
                                                                           but still mostly negative.

Figure 1 Schematic representation of the experimental tasks.

2.3     Apparatus
Participants used an optical computer mouse to complete the
task (Logitech Optical Wireless). The experimental computer
ran Ubuntu Linux version 8.10. Participants sat 100 cm in front
of a large back-projected screen which displayed the graphical
stimuli at a rate of 120 Hz (resolution 1400 × 1050 px, size
94.1° × 89.1° visual angle, 2,15 × 1,97 m). While carrying out
the tasks, participants’ gaze was tracked with a mobile eye-
tracking system. The system allowed for unconstrained head
motion by combining a head mounted eye-tracker (SR Research
Eyelink II) with an infrared motion capturing system (Vicon MX
13 with four cameras) using the libGaze tracking library [Her-
holz et al. 2008]. With this system, gaze was computed in terms
of screen coordinates at 120 Hz.
                                                                           Figure 2 Cumulative histogram of the distance (as proportions
2.4     Procedure                                                          of the initial pointer-to-target distance of approximately 800 px,
                                                                           68° visual angle) between the pointer and the target item at the
Each experimental session lasted around 90 minutes with short              time of the first fixation on the target.
pauses every 15 minutes and regular recalibration of the eye-
tracker before each block. Prior to data collection, the tasks were        Two explanations for these findings seem probable. First, partic-
explained, demonstrated to the participants, and one round of              ipants moved the pointer immediately once the trial started by
practice trials was presented. Each participant then carried out           using knowledge from previous trials about the approximate
five blocks of tasks. Each block started with 10 trials of task T1,        target location (T1) or grid location (T2). This movement re-
followed by 10 trials of T2, and 3 trials of task T3.                      quired no visual guidance and was carried out while the eye
                                                                           made a saccade to the target to guide the final targeting phase of
3     Results                                                              the movement. Similar findings were also reported for hand
                                                                           movements by van Donkelaar et al. [2000].
It is well known from experiments on visual search that search
                                                                           A second explanation could be that, in the case of T2, partici-
for combinations of features (e.g. shape and color) take more
                                                                           pants parallelized search and pointer movement. To investigate
time as a function of the number of items and difficulty of dis-
                                                                           this, we analyzed pointer movements during task T3. In this task
crimination. In addition, the number of fixations is correlated
                                                                           target locations could not be predicted from memory. A typical
with search difficulty [Zelinsky and Sheinberg 1997]. We com-
                                                                           example of eye scanpaths and pointer motion is shown in Figure
pared trial completion times and number of fixations of tasks T1
                                                                           5. Trajectories for the vertical and horizontal dimension along
and T2 to see whether an effect of the added requirement to
                                                                           the time axis are shown in Figure 5 b and c. From the inspection
search an item prior to selection was present. The mean trial
                                                                           of individual trial data it seems that the pointer roughly followed
completion time for a simple drag-and-drop trial (T1) was 3.5 s
                                                                           the scanpath throughout the trial. To test this hypothesis, we
(std. dev.: 0.45). In the search condition (T2) completing the



                                                                      90
calculated the mean Euclidean distance between the location of              p<0.01). This indicates that participants generally moved the
the mouse pointer and eye gaze across all trials of T3 for all              pointer while searching and did not adopt a turn-taking strategy.
participants. The mean Euclidean distance was minimal when
the mouse pointer signal was shifted a few hundred milliseconds
forward in time (see Figure 4). Even though the extent of the
shift varied between participants (see participant means in Fig-
ure 4) it was positive for all participants. A positive shift can be
conceived as a shift of the mouse pointer path to the left in
Figure 5 b and c. This means the mouse pointer generally lagged
behind the eye movements but it does not give any clues to
whether it did so consistently throughout the trial.




Figure 3 Cumulative histogram of the latency between a fixation
on the target and movement initiation (the point in time when the
pointer was moved more than 40 px from the starting position).




                                                                            Figure 5 Typical eye scanpath and path of pointer motion for a
                                                                            single trial of task T3. Item locations are shown as gray circles
                                                                            (color/shape coding not shown) and target items are circles
                                                                            colored in red. Selection actions are depicted as blue stars.

                                                                            4     Conclusion

                                                                            In this study, we investigated the coordination of eye move-
                                                                            ments and mouse pointer movements during a range of search
                                                                            and selection tasks. In performing these tasks, participants use
                                                                            two main eye-hand coordination strategies. First, when the
                                                                            approximate location of the target is known, participants per-
                                                                            form pointer movements that are initiated without eye guidance,
                                                                            possibly in parallel to an eye movement to the target item.
Figure 4 Mean distance of the pointer to the gaze point on the              Second, in tasks requiring visual search for a target item prior to
screen across trials of task T3 as a function of the amount of              selection, users parallelize search and pointer movements. This
phase shift of the mouse pointer “signal”. Pictograms indicate              behavior might have two functions: First, in order to utilize
effect of shift.                                                            “slack motor time” while looking for the target, users roughly
                                                                            follow their scanpath to keep the pointer close to the searched
To analyze whether the pointer followed gaze throughout a trial,            area. Thereby they minimize the amplitude of the acquisition
we implemented a model of pointer motion in which the pointer               movement, once a target is found. Second, users might employ
was moved directly from selection point to selection point and              the pointer as an active reference tool for marking potential
remained motionless during the periods in between. This model               target items or as a reference for keeping track of the search
simulates the turn-taking strategy assumed by some assistive                area. Such “deictic” pointing behavior [Ballard et al. 1997] has
interaction techniques in which a user moves the mouse to ac-               also been observed in menu selection and web browsing tasks
quire a target but keeps it stationary while searching. Compared            [Cox and Silva, 2006; Rodden et al. 2008]. Further analyses and
against actual pointer motion the mean Euclidean distance                   experiments are required to clarify the function of the observed
across all trials and participants was significantly larger for this        behavior in the current task.
turn-taking model (95% CI difference: 238-289 px, t(10)=23.0,



                                                                       91
Our results also show that the eye reaches the targeted item prior        COX, A. L., AND SILVA, M. M. 2006. The role of mouse move-
to the pointer. This is in line with the “target gaze” behavior             ments in interactive search. In Proceedings of the 28th Annual
identified by previous studies [Smith et al. 2000; Law et al.               Meeting of the Cognitive Science Society. Lawrence Erlbaum
2004]. However, our findings also imply that the eyes fixate on             Associates, 1156-1161.
the target rather late in the pointing process if the approximate
target location is known. This gives rise to the assumption that          DREWES, H., AND SCHMIDT, A. 2009. The MAGIC Touch: Com-
eye fixations are only needed for precise targeting motions                 bining MAGIC-Pointing with a Touch-Sensitive Mouse. In
during the final pointing phase. We did not find evidence for the           Human-Computer Interaction – INTERACT 2009 (Proceed-
“following” or “switching” behaviors described by Smith and                 ings of 12th IFIP TC 13 International Conference). Springer,
colleagues [2000]. We cannot exclude the possibility that these             415-428.
behaviors may occur. In the context of search tasks, however,
our data suggest that participants consistently follow their gaze         FLEETWOOD, M. D., AND BYRNE, M. D. 2008. Modeling the
with the mouse pointer rather than the other way round.                     visual search of displays: a revised ACT-R model of icon
                                                                            search based on eye-tracking data. Human-Computer Interac-
These findings also have implications for the design of opti-               tion, 21, 2, 153–197.
mized pointing techniques, for instance, gaze-assisted tech-
niques that reduce the amplitude of the pointing movement [e.g.           HERHOLZ, S., CHUANG, L. L., TANNER, T. G., BÜLTHOFF, H. H.,
Zhai et al. 1999]. An implicit assumption of these techniques is            AND FLEMING, R.W. 2008. LibGaze Real-time gaze-tracking
that the eyes precede a pointer movement so that movement time              of freely moving observers for wall-sized displays. In Pro-
savings could be gained by moving the pointer to the point of               ceedings of the Vision, Modeling, and Visualization Work-
gaze. Results of the present study indicate that users tend to              shop (VMV). IOS Press, 101-110.
minimize pointing amplitudes with little effort by carrying out
approximate pointer motions in parallel to visual search when             LAW, B., ATKINS, S. M., KIRKPATRICK, A. E., LOMAX, A. J., AND
the target is unknown or initiating movements even before fixat-            MACKENZIE, C. L. 2004. Eye Gaze Patterns Differentiate No-
ing the target when the target location is known. Both scenarios            vice and Experts in a Virtual Laparoscopic Surgery Training
are very common in many user interface tasks. Locations of                  Environment. In Proceedings of the 2004 symposium on eye
interface elements such as window controls are often well                   tracking research & applications. ACM, 41-47.
known whereas content elements such as links on a web page
must often be searched [Rodden et al. 2008]. In fact it is hard to        MACKENZIE, S. I. 1992. Fitts' Law as a Research and Design
construct a scenario where no knowledge is available or no                 Tool in Human-Computer Interaction. Human-Computer In-
search is needed. It might therefore be reasonable to explore              teraction, 7, 1, 91-139.
other optimization techniques, for instance, techniques that do
not minimize movement amplitudes but the needed endpoint                  MCGUFFIN, M., AND BALAKRISHNAN, R. 2002. Acquisition of
precision, for instance using target-expansion [e.g. McGuffin              expanding targets. In Proceedings of CHI, ACM Press, 57-64.
and Balakrishnan 2002].
                                                                          NEGGERS, S. F. W., AND BEKKERING, H. 2000. Ocular Gaze is
5     Acknowledgements                                                      Anchored to the Target of an Ongoing Pointing Movement.
                                                                            Journal of Neurophysiology, 83, 639-651.
This project is supported by the Baden-Württemberg Informa-
                                                                          RÄIHÄ, K.-J., AND ŠPAKOV, O. 2009. Disambiguating Ninja
tion Technology program (BW-FIT) through the state of Baden-
                                                                            Cursors with Eye Gaze. In Proceedings of CHI. ACM, 1411-
Württemberg, Germany and the World Class University pro-
                                                                            1414.
gram through the National Research Foundation of Korea
funded by the Ministry of Education, Science and Technology
                                                                          RODDEN, K., FU, X., AULA, A., AND SPIRO, I. 2008. Eye-Mouse
(R31-2008-000-10008-0).
                                                                            Coordination Patterns on Web Search Results Pages. In
                                                                            Proceedings of CHI extended abstracts. ACM, 2997-3002.
References
                                                                          SMITH, B. A., HO, J., ARK, W., AND ZHAI, S. 2000. Hand Eye
BALAKRISHNAN, R. 2004. “Beating” Fitts’ law: virtual enhance-               Coordination Patterns in Target Selection. In Proceedings of
  ments for pointing facilitation. International Journal of Hu-             the 2000 symposium on eye tracking research & applications.
  man-Computer Studies, 61, 857-874.                                        ACM, 117-122.

BALLARD, D. H., HAYHOE, M. M., POOK, P. K., AND RAO, R. P. N.             VAN  DONKELAAR, P., AND STAUB, J. 2000. Eye-hand coordina-
  1997. Deictic codes for the embodiment of cognition. Beha-                tion to visual versus remembered targets. Experimental Brain
  vioral and Brain Sciences, 20, 723-767.                                   Research, 133, 414-418.

BIEG, H., CHUANG, L. L., AND REITERER, H. 2009. Gaze-Assisted             ZELINSKY, G. J., AND SHEINBERG, D. L. 1997. Eye Movements
  Pointing for Wall-Sized Displays. In Human-Computer Inte-                 During Parallel-Serial Visual Search. Journal of Experimental
  raction – INTERACT 2009 (Proceedings of 12th IFIP TC 13                   Psychology: Human Perception and Performance, 23, 1, 244-
  International Conference). Springer, 9-12.                                262.

BLANCH, R., AND ORTEGA, M. 2009. Rake cursor: improving                   ZHAI, S., MORIMOTO, C., IHDE, S. 1999. Manual and Gaze Input
  pointing performance with concurrent input channels. In Pro-              Cascaded (MAGIC) Pointing. Proceedings of CHI. ACM,
  ceedings of CHI. ACM, 1415-1418.                                          246 – 253.




                                                                     92

Weitere ähnliche Inhalte

Was ist angesagt?

Pattern Approximation Based Generalized Image Noise Reduction Using Adaptive ...
Pattern Approximation Based Generalized Image Noise Reduction Using Adaptive ...Pattern Approximation Based Generalized Image Noise Reduction Using Adaptive ...
Pattern Approximation Based Generalized Image Noise Reduction Using Adaptive ...
IJECEIAES
 
Paper id 21201419
Paper id 21201419Paper id 21201419
Paper id 21201419
IJRAT
 
A Framework for Human Action Detection via Extraction of Multimodal Features
A Framework for Human Action Detection via Extraction of Multimodal FeaturesA Framework for Human Action Detection via Extraction of Multimodal Features
A Framework for Human Action Detection via Extraction of Multimodal Features
CSCJournals
 

Was ist angesagt? (18)

J017426467
J017426467J017426467
J017426467
 
Mollenbach Single Gaze Gestures
Mollenbach Single Gaze GesturesMollenbach Single Gaze Gestures
Mollenbach Single Gaze Gestures
 
Pattern Approximation Based Generalized Image Noise Reduction Using Adaptive ...
Pattern Approximation Based Generalized Image Noise Reduction Using Adaptive ...Pattern Approximation Based Generalized Image Noise Reduction Using Adaptive ...
Pattern Approximation Based Generalized Image Noise Reduction Using Adaptive ...
 
IRJET- Kidney Stone Classification using Deep Neural Networks and Facilitatin...
IRJET- Kidney Stone Classification using Deep Neural Networks and Facilitatin...IRJET- Kidney Stone Classification using Deep Neural Networks and Facilitatin...
IRJET- Kidney Stone Classification using Deep Neural Networks and Facilitatin...
 
Sub1586
Sub1586Sub1586
Sub1586
 
Improving image resolution through the cra algorithm involved recycling proce...
Improving image resolution through the cra algorithm involved recycling proce...Improving image resolution through the cra algorithm involved recycling proce...
Improving image resolution through the cra algorithm involved recycling proce...
 
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
 
IRJET- Comparative Study of Artificial Neural Networks and Convolutional N...
IRJET- 	  Comparative Study of Artificial Neural Networks and Convolutional N...IRJET- 	  Comparative Study of Artificial Neural Networks and Convolutional N...
IRJET- Comparative Study of Artificial Neural Networks and Convolutional N...
 
Exploring visual and motion saliency for automatic video object extraction
Exploring visual and motion saliency for automatic video object extractionExploring visual and motion saliency for automatic video object extraction
Exploring visual and motion saliency for automatic video object extraction
 
Maximizing Strength of Digital Watermarks Using Fuzzy Logic
Maximizing Strength of Digital Watermarks Using Fuzzy LogicMaximizing Strength of Digital Watermarks Using Fuzzy Logic
Maximizing Strength of Digital Watermarks Using Fuzzy Logic
 
IRJET - Deep Learning Approach to Inpainting and Outpainting System
IRJET -  	  Deep Learning Approach to Inpainting and Outpainting SystemIRJET -  	  Deep Learning Approach to Inpainting and Outpainting System
IRJET - Deep Learning Approach to Inpainting and Outpainting System
 
Paper id 21201419
Paper id 21201419Paper id 21201419
Paper id 21201419
 
Image Enhancement by Image Fusion for Crime Investigation
Image Enhancement by Image Fusion for Crime InvestigationImage Enhancement by Image Fusion for Crime Investigation
Image Enhancement by Image Fusion for Crime Investigation
 
315 319
315 319315 319
315 319
 
IRJET - Effective Workflow for High-Performance Recognition of Fruits using M...
IRJET - Effective Workflow for High-Performance Recognition of Fruits using M...IRJET - Effective Workflow for High-Performance Recognition of Fruits using M...
IRJET - Effective Workflow for High-Performance Recognition of Fruits using M...
 
A Framework for Human Action Detection via Extraction of Multimodal Features
A Framework for Human Action Detection via Extraction of Multimodal FeaturesA Framework for Human Action Detection via Extraction of Multimodal Features
A Framework for Human Action Detection via Extraction of Multimodal Features
 
THE EFFECT OF IMPLEMENTING OF NONLINEAR FILTERS FOR ENHANCING MEDICAL IMAGES ...
THE EFFECT OF IMPLEMENTING OF NONLINEAR FILTERS FOR ENHANCING MEDICAL IMAGES ...THE EFFECT OF IMPLEMENTING OF NONLINEAR FILTERS FOR ENHANCING MEDICAL IMAGES ...
THE EFFECT OF IMPLEMENTING OF NONLINEAR FILTERS FOR ENHANCING MEDICAL IMAGES ...
 
M017427985
M017427985M017427985
M017427985
 

Andere mochten auch

What can Leeds-based creative agency, talktojason do for you
What can Leeds-based creative agency, talktojason do for youWhat can Leeds-based creative agency, talktojason do for you
What can Leeds-based creative agency, talktojason do for you
Jason Kelly
 
Nathews_Sam_Supporting_Materials_PRNewsApp
Nathews_Sam_Supporting_Materials_PRNewsAppNathews_Sam_Supporting_Materials_PRNewsApp
Nathews_Sam_Supporting_Materials_PRNewsApp
Swnathews
 
Plan de igualdad_ikea
Plan de igualdad_ikeaPlan de igualdad_ikea
Plan de igualdad_ikea
oscargaliza
 
Acta c.i. 12_enero_2011_logo
Acta c.i. 12_enero_2011_logoActa c.i. 12_enero_2011_logo
Acta c.i. 12_enero_2011_logo
oscargaliza
 
Kahvakuulaurheilu goexpo2013-compressed2
Kahvakuulaurheilu goexpo2013-compressed2Kahvakuulaurheilu goexpo2013-compressed2
Kahvakuulaurheilu goexpo2013-compressed2
Marko Suomi
 
ParaEmpezarSeasonsandWeather
ParaEmpezarSeasonsandWeatherParaEmpezarSeasonsandWeather
ParaEmpezarSeasonsandWeather
SenoraAmandaWhite
 

Andere mochten auch (20)

What can Leeds-based creative agency, talktojason do for you
What can Leeds-based creative agency, talktojason do for youWhat can Leeds-based creative agency, talktojason do for you
What can Leeds-based creative agency, talktojason do for you
 
Hacker Space
Hacker SpaceHacker Space
Hacker Space
 
ความขัดแย้งในเมียนม่าร์
ความขัดแย้งในเมียนม่าร์ความขัดแย้งในเมียนม่าร์
ความขัดแย้งในเมียนม่าร์
 
Warsow
WarsowWarsow
Warsow
 
Nathews_Sam_Supporting_Materials_PRNewsApp
Nathews_Sam_Supporting_Materials_PRNewsAppNathews_Sam_Supporting_Materials_PRNewsApp
Nathews_Sam_Supporting_Materials_PRNewsApp
 
การแบ่งภูมิภาค
การแบ่งภูมิภาคการแบ่งภูมิภาค
การแบ่งภูมิภาค
 
Αγωγή Υγείας - Εφηβικές συζητήσεις και προβληματισμοί - Επίλυση κρίσεων
Αγωγή Υγείας - Εφηβικές συζητήσεις και προβληματισμοί - Επίλυση κρίσεωνΑγωγή Υγείας - Εφηβικές συζητήσεις και προβληματισμοί - Επίλυση κρίσεων
Αγωγή Υγείας - Εφηβικές συζητήσεις και προβληματισμοί - Επίλυση κρίσεων
 
Cd covers
Cd coversCd covers
Cd covers
 
Plan de igualdad_ikea
Plan de igualdad_ikeaPlan de igualdad_ikea
Plan de igualdad_ikea
 
Acta c.i. 12_enero_2011_logo
Acta c.i. 12_enero_2011_logoActa c.i. 12_enero_2011_logo
Acta c.i. 12_enero_2011_logo
 
SEB
SEBSEB
SEB
 
กำแพงเบอร น(สมบร ณ_)-1
กำแพงเบอร น(สมบร ณ_)-1กำแพงเบอร น(สมบร ณ_)-1
กำแพงเบอร น(สมบร ณ_)-1
 
DiViA-esitys, Personoitu digitaalinen asiakasdialogi (atBusiness)
DiViA-esitys, Personoitu digitaalinen asiakasdialogi (atBusiness)DiViA-esitys, Personoitu digitaalinen asiakasdialogi (atBusiness)
DiViA-esitys, Personoitu digitaalinen asiakasdialogi (atBusiness)
 
ทรัพยากรในทวีปยุโรป 2.4
ทรัพยากรในทวีปยุโรป 2.4ทรัพยากรในทวีปยุโรป 2.4
ทรัพยากรในทวีปยุโรป 2.4
 
Ujian koko 2013
Ujian koko 2013Ujian koko 2013
Ujian koko 2013
 
Predstavljanje poslovanja - press konferencija 15.06.12
Predstavljanje poslovanja - press konferencija 15.06.12Predstavljanje poslovanja - press konferencija 15.06.12
Predstavljanje poslovanja - press konferencija 15.06.12
 
Kahvakuulaurheilu goexpo2013-compressed2
Kahvakuulaurheilu goexpo2013-compressed2Kahvakuulaurheilu goexpo2013-compressed2
Kahvakuulaurheilu goexpo2013-compressed2
 
Outlook Express
Outlook ExpressOutlook Express
Outlook Express
 
Wi-Foo Ninjitsu Exploitation
Wi-Foo Ninjitsu ExploitationWi-Foo Ninjitsu Exploitation
Wi-Foo Ninjitsu Exploitation
 
ParaEmpezarSeasonsandWeather
ParaEmpezarSeasonsandWeatherParaEmpezarSeasonsandWeather
ParaEmpezarSeasonsandWeather
 

Ähnlich wie Bieg Eye And Pointer Coordination In Search And Selection Tasks

Paper id 25201413
Paper id 25201413Paper id 25201413
Paper id 25201413
IJRAT
 
Proposed Multi-object Tracking Algorithm Using Sobel Edge Detection operator
Proposed Multi-object Tracking Algorithm Using Sobel Edge Detection operatorProposed Multi-object Tracking Algorithm Using Sobel Edge Detection operator
Proposed Multi-object Tracking Algorithm Using Sobel Edge Detection operator
QUESTJOURNAL
 
Van der kamp.2011.gaze and voice controlled drawing
Van der kamp.2011.gaze and voice controlled drawingVan der kamp.2011.gaze and voice controlled drawing
Van der kamp.2011.gaze and voice controlled drawing
mrgazer
 

Ähnlich wie Bieg Eye And Pointer Coordination In Search And Selection Tasks (20)

Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of...
Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of...Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of...
Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of...
 
Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of...
Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of...Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of...
Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of...
 
Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of...
Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of...Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of...
Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of...
 
Computer vision for interactive computer graphics
Computer vision for interactive computer graphicsComputer vision for interactive computer graphics
Computer vision for interactive computer graphics
 
Lift using projected coded light for finger tracking and device augmentation
Lift using projected coded light for finger tracking and device augmentationLift using projected coded light for finger tracking and device augmentation
Lift using projected coded light for finger tracking and device augmentation
 
RBI paper, CHI 2008
RBI paper, CHI 2008RBI paper, CHI 2008
RBI paper, CHI 2008
 
DragGan AI.pdf
DragGan AI.pdfDragGan AI.pdf
DragGan AI.pdf
 
HGR-thesis
HGR-thesisHGR-thesis
HGR-thesis
 
BTP Report.pdf
BTP Report.pdfBTP Report.pdf
BTP Report.pdf
 
Paper id 25201413
Paper id 25201413Paper id 25201413
Paper id 25201413
 
SD-miner System to Retrieve Probabilistic Neighborhood Points in Spatial Dat...
SD-miner System to Retrieve Probabilistic Neighborhood Points  in Spatial Dat...SD-miner System to Retrieve Probabilistic Neighborhood Points  in Spatial Dat...
SD-miner System to Retrieve Probabilistic Neighborhood Points in Spatial Dat...
 
HUMAN COMPUTER INTERACTION ALGORITHM BASED ON SCENE SITUATION AWARENESS
HUMAN COMPUTER INTERACTION ALGORITHM BASED ON SCENE SITUATION AWARENESSHUMAN COMPUTER INTERACTION ALGORITHM BASED ON SCENE SITUATION AWARENESS
HUMAN COMPUTER INTERACTION ALGORITHM BASED ON SCENE SITUATION AWARENESS
 
Human Computer Interaction Algorithm Based on Scene Situation Awareness
Human Computer Interaction Algorithm Based on Scene Situation Awareness Human Computer Interaction Algorithm Based on Scene Situation Awareness
Human Computer Interaction Algorithm Based on Scene Situation Awareness
 
Hand Gesture Recognition using OpenCV and Python
Hand Gesture Recognition using OpenCV and PythonHand Gesture Recognition using OpenCV and Python
Hand Gesture Recognition using OpenCV and Python
 
A Survey Paper on Controlling Computer using Hand Gestures
A Survey Paper on Controlling Computer using Hand GesturesA Survey Paper on Controlling Computer using Hand Gestures
A Survey Paper on Controlling Computer using Hand Gestures
 
Proposed Multi-object Tracking Algorithm Using Sobel Edge Detection operator
Proposed Multi-object Tracking Algorithm Using Sobel Edge Detection operatorProposed Multi-object Tracking Algorithm Using Sobel Edge Detection operator
Proposed Multi-object Tracking Algorithm Using Sobel Edge Detection operator
 
Van der kamp.2011.gaze and voice controlled drawing
Van der kamp.2011.gaze and voice controlled drawingVan der kamp.2011.gaze and voice controlled drawing
Van der kamp.2011.gaze and voice controlled drawing
 
Hand Gesture Recognition System for Human-Computer Interaction with Web-Cam
Hand Gesture Recognition System for Human-Computer Interaction with Web-CamHand Gesture Recognition System for Human-Computer Interaction with Web-Cam
Hand Gesture Recognition System for Human-Computer Interaction with Web-Cam
 
Gesture Based Interface Using Motion and Image Comparison
Gesture Based Interface Using Motion and Image ComparisonGesture Based Interface Using Motion and Image Comparison
Gesture Based Interface Using Motion and Image Comparison
 
Object Detection and Tracking AI Robot
Object Detection and Tracking AI RobotObject Detection and Tracking AI Robot
Object Detection and Tracking AI Robot
 

Mehr von Kalle

Mehr von Kalle (20)

Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Blignaut Visual Span And Other Parameters For The Generation Of HeatmapsBlignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps
 
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
 
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
 
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
 
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
 
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze ControlUrbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
 
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
 
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic TrainingTien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
 
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
 
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser OphthalmoscopeStevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
 
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
 
Skovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze AloneSkovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze Alone
 
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze TrackerSan Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
 
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural TasksRyan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
 
Rosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem SolvingRosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem Solving
 
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual SearchQvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
 
Prats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement StudyPrats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement Study
 
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
 
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
 
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
 

Bieg Eye And Pointer Coordination In Search And Selection Tasks

  • 1. Eye and Pointer Coordination in Search and Selection Tasks Hans-Joachim Bieg*, Harald Reiterer Heinrich H. Bülthoff Lewis L. Chuang, Roland W. Fleming Human-Computer Interaction Group Max Planck Institute for Biological Max Planck Institute for Biological University of Konstanz, Germany Cybernetics, Tübingen, Germany and Cybernetics, Tübingen, Germany Dept. of Brain and Cognitive Engineering, Korea University, Seoul, Korea Abstract Eye and hand coordination has been extensively studied for reaching and pointing actions with the hands [e.g. Neggers and Selecting a graphical item by pointing with a computer mouse is Bekkering 2000] but not for tasks with a computer input device a ubiquitous task in many graphical user interfaces. Several in a graphical user interface. A first study was carried out by techniques have been suggested to facilitate this task, for in- Smith et al. [2000] to examine differences in the coordination of stance, by reducing the required movement distance. Here we eyes and pointer movements when using various computer input measure the natural coordination of eye and mouse pointer devices. Smith and colleagues found that participants employed control across several search and selection tasks. We find that three different eye and hand coordination strategies when carry- users automatically minimize the distance to likely targets in an ing out simple target acquisition tasks: a) eyes leading the poin- intelligent, task dependent way. When target location is highly ter to the target (target gaze), b) eyes directly tracking the poin- predictable, top-down knowledge can enable users to initiate ter (following), and c) switching between target and pointer pointer movements prior to target fixation. These findings ques- (switching). Similar patterns were found by Law et al. [2004] tion the utility of existing assistive pointing techniques and albeit with a specialized laparoscopic tool and not with a com- suggest that alternative approaches might be more effective. mon computer pointing device. CR Categories: H.5.2 [Information Interfaces and Presentation] In the present study we investigated the coordination of eyes and User Interfaces – Input devices and strategies. H.1.2 [Models pointer movements on several selection tasks. In contrast to and Principles] User/Machine Systems – Human factors previous studies, these tasks required visual search for the target item before selection. Such a combination of search and selec- Keywords: Eye movements, Eye-tracking, Eye-hand Coordina- tion is a staple of numerous human-computer interaction tasks tion, Multimodal Interfaces, Input Devices [Fleetwood and Byrne 2008]. We wanted to see whether and how visual search affected the coordination of eye and mouse pointer movement. 1 Introduction 2 Method A fundamental task in modern computer interfaces is the selec- tion of a graphical item with a pointing device (e.g. a mouse). The difficulty of such a pointing task is well described by Fitts’ 2.1 Participants Law as the logarithm of the amplitude (distance to the target) of the movement divided by the target width (the required accura- The experiment was performed on 12 participants with normal cy) [MacKenzie 1992]. Several techniques have been developed or corrected-to-normal vision. Data from one participant was to facilitate pointing tasks by decreasing their difficulty along removed because of large eye-tracker calibration errors. All the lines of Fitts’ Law [Balakrishnan 2003]. For instance, tech- participants were regular computer and mouse users. niques exploit eye-tracking technology to move the pointer to the location of a detected eye fixation on the computer screen 2.2 Tasks [Zhai, et al. 1999; Bieg et al. 2009; Drewes and Schmidt 2009] or to select one of several pointers near the fixation point [Räihä Three tasks were presented (T1-T3) that required searching for and Špakov 2009; Blanch and Ortega 2009], thereby reducing and selecting (i.e. clicking on) one or several target items or the amplitude and thus also the difficulty of the pointing move- drag-and-drop of the target item (see Figure 1). A fourth task ment. The underlying assumption is that the item of interest is was also presented during the experiment but is not discussed fixated before it is acquired and that turn-taking occurs between here. Task T1 was a simple target acquisition task. Users started eye and pointer movement to maximize movement time savings. at a location 800 px to the right of the target item and moved a The purpose of this work is to test directly whether this assump- graphical pointer with the mouse to select it and drag it over to tion holds across a range of tasks. the circular area. Task completion was confirmed by clicking on a button labeled “done” next to the circular area. In the second *e-mail: hans-joachim.bieg@tuebingen.mpg.de task (T2) participants searched a 13 × 13 grid for a single target, specified uniquely by shape (triangle, rectangle, circle) and color (red, green, blue) before performing the same drag-and-drop task. The position of the grid on the screen was fixed and the locations of target and distractor items were randomized within the grid. In task T3, the same amount of items were scattered across the whole screen. Five target items had to be searched © 2010 Association for Computing Machinery. ACM acknowledges that this contribution and clicked on in this task. The size of each item was approx- was authored by an employee, contractor or affiliate of the German Government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this imately 20 px (3.3 cm, 1.92° visual angle). Before each trial, a article, or to allow others to do so, for Government purposes only. ETRA 2010, Austin, TX, March 22 – 24, 2010. © 2010 ACM 978-1-60558-994-7/10/0003 $10.00 89
  • 2. screen was shown to inform participants of the current target’s trial took considerably longer (mean: 5.5 s, std. dev.: 0.60, 95% color and shape. For reasons of consistency, this was also done CI of difference: 1.5-2.3 s, t(10)=10.5, p<0.01). Fewer fixations in task T1. This preparatory screen contained an instance of the were made during the single-target task T1 (mean: 6.0, std. dev.: target item and an “ok” button which had to be clicked in order 0.78) compared to the search task T2 (mean: 14.4, std. dev.: to start the trial. For tasks T1 and T2, the button and item in- 2.37, 95% CI of difference: 6.7-10.0, t(10)=11.1, p<0.01). Thus, stance were located at a constant position to the right of the it took participants approximately 2 seconds longer to complete screen. The button was at the exact same position as the “done” task T2. This time was likely spent on the search for the target button in the trial display. The item instance was above the item and eye movements required for item identification. button. For task T3, item and button were shown in the center of the screen. With this arrangement, gaze and pointer position Since almost 40% of trial completion time seems to be spent on were at an approximate constant starting position for each trial search, how do users coordinate pointing movements during this (near the “done” button on the right side for T1 and T2 or center time? Analysis of pointer movement in task T2 shows that the of the screen for T3). pointer was already close to the target when the first fixation on the target item was made. For instance, in more than 80% of trials, the pointer had already moved half of its initial distance to the target (Figure 2 line a). For the simple selection task T1, the pointer’s distance to the target was generally larger. Still, in almost 60% of trials, the pointer had already moved more than half of the initial distance (Figure 2 line b). In task T2, the me- dian latency between the first fixation on the target item and the beginning of the pointer movement was -1.7 s. In T1, the median latency was -190 ms. In more than 60% of trials of T2 the poin- ter movement was initiated almost 1 s before the target had been fixated (Figure 3 line a). In trials of T1, the latency was shorter but still mostly negative. Figure 1 Schematic representation of the experimental tasks. 2.3 Apparatus Participants used an optical computer mouse to complete the task (Logitech Optical Wireless). The experimental computer ran Ubuntu Linux version 8.10. Participants sat 100 cm in front of a large back-projected screen which displayed the graphical stimuli at a rate of 120 Hz (resolution 1400 × 1050 px, size 94.1° × 89.1° visual angle, 2,15 × 1,97 m). While carrying out the tasks, participants’ gaze was tracked with a mobile eye- tracking system. The system allowed for unconstrained head motion by combining a head mounted eye-tracker (SR Research Eyelink II) with an infrared motion capturing system (Vicon MX 13 with four cameras) using the libGaze tracking library [Her- holz et al. 2008]. With this system, gaze was computed in terms of screen coordinates at 120 Hz. Figure 2 Cumulative histogram of the distance (as proportions 2.4 Procedure of the initial pointer-to-target distance of approximately 800 px, 68° visual angle) between the pointer and the target item at the Each experimental session lasted around 90 minutes with short time of the first fixation on the target. pauses every 15 minutes and regular recalibration of the eye- tracker before each block. Prior to data collection, the tasks were Two explanations for these findings seem probable. First, partic- explained, demonstrated to the participants, and one round of ipants moved the pointer immediately once the trial started by practice trials was presented. Each participant then carried out using knowledge from previous trials about the approximate five blocks of tasks. Each block started with 10 trials of task T1, target location (T1) or grid location (T2). This movement re- followed by 10 trials of T2, and 3 trials of task T3. quired no visual guidance and was carried out while the eye made a saccade to the target to guide the final targeting phase of 3 Results the movement. Similar findings were also reported for hand movements by van Donkelaar et al. [2000]. It is well known from experiments on visual search that search A second explanation could be that, in the case of T2, partici- for combinations of features (e.g. shape and color) take more pants parallelized search and pointer movement. To investigate time as a function of the number of items and difficulty of dis- this, we analyzed pointer movements during task T3. In this task crimination. In addition, the number of fixations is correlated target locations could not be predicted from memory. A typical with search difficulty [Zelinsky and Sheinberg 1997]. We com- example of eye scanpaths and pointer motion is shown in Figure pared trial completion times and number of fixations of tasks T1 5. Trajectories for the vertical and horizontal dimension along and T2 to see whether an effect of the added requirement to the time axis are shown in Figure 5 b and c. From the inspection search an item prior to selection was present. The mean trial of individual trial data it seems that the pointer roughly followed completion time for a simple drag-and-drop trial (T1) was 3.5 s the scanpath throughout the trial. To test this hypothesis, we (std. dev.: 0.45). In the search condition (T2) completing the 90
  • 3. calculated the mean Euclidean distance between the location of p<0.01). This indicates that participants generally moved the the mouse pointer and eye gaze across all trials of T3 for all pointer while searching and did not adopt a turn-taking strategy. participants. The mean Euclidean distance was minimal when the mouse pointer signal was shifted a few hundred milliseconds forward in time (see Figure 4). Even though the extent of the shift varied between participants (see participant means in Fig- ure 4) it was positive for all participants. A positive shift can be conceived as a shift of the mouse pointer path to the left in Figure 5 b and c. This means the mouse pointer generally lagged behind the eye movements but it does not give any clues to whether it did so consistently throughout the trial. Figure 3 Cumulative histogram of the latency between a fixation on the target and movement initiation (the point in time when the pointer was moved more than 40 px from the starting position). Figure 5 Typical eye scanpath and path of pointer motion for a single trial of task T3. Item locations are shown as gray circles (color/shape coding not shown) and target items are circles colored in red. Selection actions are depicted as blue stars. 4 Conclusion In this study, we investigated the coordination of eye move- ments and mouse pointer movements during a range of search and selection tasks. In performing these tasks, participants use two main eye-hand coordination strategies. First, when the approximate location of the target is known, participants per- form pointer movements that are initiated without eye guidance, possibly in parallel to an eye movement to the target item. Figure 4 Mean distance of the pointer to the gaze point on the Second, in tasks requiring visual search for a target item prior to screen across trials of task T3 as a function of the amount of selection, users parallelize search and pointer movements. This phase shift of the mouse pointer “signal”. Pictograms indicate behavior might have two functions: First, in order to utilize effect of shift. “slack motor time” while looking for the target, users roughly follow their scanpath to keep the pointer close to the searched To analyze whether the pointer followed gaze throughout a trial, area. Thereby they minimize the amplitude of the acquisition we implemented a model of pointer motion in which the pointer movement, once a target is found. Second, users might employ was moved directly from selection point to selection point and the pointer as an active reference tool for marking potential remained motionless during the periods in between. This model target items or as a reference for keeping track of the search simulates the turn-taking strategy assumed by some assistive area. Such “deictic” pointing behavior [Ballard et al. 1997] has interaction techniques in which a user moves the mouse to ac- also been observed in menu selection and web browsing tasks quire a target but keeps it stationary while searching. Compared [Cox and Silva, 2006; Rodden et al. 2008]. Further analyses and against actual pointer motion the mean Euclidean distance experiments are required to clarify the function of the observed across all trials and participants was significantly larger for this behavior in the current task. turn-taking model (95% CI difference: 238-289 px, t(10)=23.0, 91
  • 4. Our results also show that the eye reaches the targeted item prior COX, A. L., AND SILVA, M. M. 2006. The role of mouse move- to the pointer. This is in line with the “target gaze” behavior ments in interactive search. In Proceedings of the 28th Annual identified by previous studies [Smith et al. 2000; Law et al. Meeting of the Cognitive Science Society. Lawrence Erlbaum 2004]. However, our findings also imply that the eyes fixate on Associates, 1156-1161. the target rather late in the pointing process if the approximate target location is known. This gives rise to the assumption that DREWES, H., AND SCHMIDT, A. 2009. The MAGIC Touch: Com- eye fixations are only needed for precise targeting motions bining MAGIC-Pointing with a Touch-Sensitive Mouse. In during the final pointing phase. We did not find evidence for the Human-Computer Interaction – INTERACT 2009 (Proceed- “following” or “switching” behaviors described by Smith and ings of 12th IFIP TC 13 International Conference). Springer, colleagues [2000]. We cannot exclude the possibility that these 415-428. behaviors may occur. In the context of search tasks, however, our data suggest that participants consistently follow their gaze FLEETWOOD, M. D., AND BYRNE, M. D. 2008. Modeling the with the mouse pointer rather than the other way round. visual search of displays: a revised ACT-R model of icon search based on eye-tracking data. Human-Computer Interac- These findings also have implications for the design of opti- tion, 21, 2, 153–197. mized pointing techniques, for instance, gaze-assisted tech- niques that reduce the amplitude of the pointing movement [e.g. HERHOLZ, S., CHUANG, L. L., TANNER, T. G., BÜLTHOFF, H. H., Zhai et al. 1999]. An implicit assumption of these techniques is AND FLEMING, R.W. 2008. LibGaze Real-time gaze-tracking that the eyes precede a pointer movement so that movement time of freely moving observers for wall-sized displays. In Pro- savings could be gained by moving the pointer to the point of ceedings of the Vision, Modeling, and Visualization Work- gaze. Results of the present study indicate that users tend to shop (VMV). IOS Press, 101-110. minimize pointing amplitudes with little effort by carrying out approximate pointer motions in parallel to visual search when LAW, B., ATKINS, S. M., KIRKPATRICK, A. E., LOMAX, A. J., AND the target is unknown or initiating movements even before fixat- MACKENZIE, C. L. 2004. Eye Gaze Patterns Differentiate No- ing the target when the target location is known. Both scenarios vice and Experts in a Virtual Laparoscopic Surgery Training are very common in many user interface tasks. Locations of Environment. In Proceedings of the 2004 symposium on eye interface elements such as window controls are often well tracking research & applications. ACM, 41-47. known whereas content elements such as links on a web page must often be searched [Rodden et al. 2008]. In fact it is hard to MACKENZIE, S. I. 1992. Fitts' Law as a Research and Design construct a scenario where no knowledge is available or no Tool in Human-Computer Interaction. Human-Computer In- search is needed. It might therefore be reasonable to explore teraction, 7, 1, 91-139. other optimization techniques, for instance, techniques that do not minimize movement amplitudes but the needed endpoint MCGUFFIN, M., AND BALAKRISHNAN, R. 2002. Acquisition of precision, for instance using target-expansion [e.g. McGuffin expanding targets. In Proceedings of CHI, ACM Press, 57-64. and Balakrishnan 2002]. NEGGERS, S. F. W., AND BEKKERING, H. 2000. Ocular Gaze is 5 Acknowledgements Anchored to the Target of an Ongoing Pointing Movement. Journal of Neurophysiology, 83, 639-651. This project is supported by the Baden-Württemberg Informa- RÄIHÄ, K.-J., AND ŠPAKOV, O. 2009. Disambiguating Ninja tion Technology program (BW-FIT) through the state of Baden- Cursors with Eye Gaze. In Proceedings of CHI. ACM, 1411- Württemberg, Germany and the World Class University pro- 1414. gram through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology RODDEN, K., FU, X., AULA, A., AND SPIRO, I. 2008. Eye-Mouse (R31-2008-000-10008-0). Coordination Patterns on Web Search Results Pages. In Proceedings of CHI extended abstracts. ACM, 2997-3002. References SMITH, B. A., HO, J., ARK, W., AND ZHAI, S. 2000. Hand Eye BALAKRISHNAN, R. 2004. “Beating” Fitts’ law: virtual enhance- Coordination Patterns in Target Selection. In Proceedings of ments for pointing facilitation. International Journal of Hu- the 2000 symposium on eye tracking research & applications. man-Computer Studies, 61, 857-874. ACM, 117-122. BALLARD, D. H., HAYHOE, M. M., POOK, P. K., AND RAO, R. P. N. VAN DONKELAAR, P., AND STAUB, J. 2000. Eye-hand coordina- 1997. Deictic codes for the embodiment of cognition. Beha- tion to visual versus remembered targets. Experimental Brain vioral and Brain Sciences, 20, 723-767. Research, 133, 414-418. BIEG, H., CHUANG, L. L., AND REITERER, H. 2009. Gaze-Assisted ZELINSKY, G. J., AND SHEINBERG, D. L. 1997. Eye Movements Pointing for Wall-Sized Displays. In Human-Computer Inte- During Parallel-Serial Visual Search. Journal of Experimental raction – INTERACT 2009 (Proceedings of 12th IFIP TC 13 Psychology: Human Perception and Performance, 23, 1, 244- International Conference). Springer, 9-12. 262. BLANCH, R., AND ORTEGA, M. 2009. Rake cursor: improving ZHAI, S., MORIMOTO, C., IHDE, S. 1999. Manual and Gaze Input pointing performance with concurrent input channels. In Pro- Cascaded (MAGIC) Pointing. Proceedings of CHI. ACM, ceedings of CHI. ACM, 1415-1418. 246 – 253. 92