Pupil Center Detection in Low Resolution Images

Detlev Droege                            Dietrich Paulus
Active Vision Group                      Active Vision Group
University of Koblenz-Landau             University of Koblenz-Landau
droege@uni-koblenz.de                    paulus@uni-koblenz.de

Abstract

In some situations, high quality eye tracking systems are not affordable. This generates the demand for inexpensive systems built upon non-specialized, off-the-shelf devices. Investigations show that algorithms developed for high resolution systems do not perform satisfactorily on such low-cost and low resolution systems. We investigate algorithms specifically tailored to such low resolution input devices, based on combinations of different strategies. An approach called gradient direction consensus is introduced and compared to image based correlation with adaptive templates as well as other known methods. The results are compared using synthetic input data with known ground truth.

CR Categories: I.4.8 [Image Processing and Computer Vision]: Scene Analysis—Tracking; I.5.4 [Pattern Recognition]: Applications—Computer vision

Keywords: pupil center detection, low resolution, gaze tracking

[Figure 1: Experimental setup.]

1   Introduction

The quality and accuracy of common eye and gaze tracking devices provides a very solid base for gaze based interaction systems. However, they are usually not precise enough to select tiny screen elements of common graphical user interfaces, leading to the development of specific user interfaces for gaze interaction. Such interfaces are designed to be used with a much lower need for accuracy, not only due to the technical limitations, but also to account for the users' capabilities [Majaranta and Räihä 2002]. Gaze interaction systems are often used by disabled users, who, depending on the type and severity of their handicaps, might also not be able to position their gaze as exactly as would be required by conventional user interfaces.

Depending on the service capability of the health care system in different countries, affected persons might often not be funded to obtain a gaze tracking system. Therefore, several research groups work on systems using inexpensive off-the-shelf devices to compile simple gaze tracking systems, like e.g. [San Agustin et al. 2009]. Such systems will perform at a significantly lower accuracy than the established systems, but still well enough to be used with gaze interaction systems. While the goal of using cheap (US$ 20-30) web cams could not yet be met, system costs of less than US$ 300 are currently realistic. Figure 1 shows an experimental setup built using such components.

A number of algorithms has been published to determine the pupil center in eye tracking systems, e.g. [Daunys and Ramanauskas 2004], [Li et al. 2005], [Ohno et al. 2002], and others. These algorithms were shown to work well on medium to high resolution images; for low resolution input, however, they leave room for improvement.

2   Related Work

Numerous approaches to the detection of the eye in video-oculography (video based eye trackers) have been described in various publications. A thorough overview is given by Hansen and Ji [2009]. Most of these algorithms were developed with specific setups and conditions in mind, as the usage scenarios for eye detection and tracking are quite different.

In the context of iris recognition, close-up high resolution images of the subject's eye are taken for granted. Appropriate oculars, specific illumination and suitable cameras deliver almost perfect images of the iris, at the price of distracting the eye with the extra illumination. Such images were the basis for Daugman's algorithm described in [Daugman 2004]. He defines the integrodifferential operator, based on a circular integral around the currently estimated pupil/iris center. While the radius is iteratively increased, large changes in the integral indicate the limbus and are used to find new center estimates.
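To illustrate the idea, the following Python sketch evaluates the circular integral for growing radii around a candidate center and returns the radius with the strongest change. It is a minimal sketch, not Daugman's exact formulation (which additionally smooths the radial derivative); the sampling density, radius range and nearest-neighbor lookup are illustrative assumptions.

```python
import numpy as np

def circular_mean(img, cx, cy, r, n_samples=64):
    """Mean intensity along a circle of radius r around (cx, cy),
    using nearest-neighbor sampling at n_samples angles."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip(np.rint(cx + r * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.rint(cy + r * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
    return float(img[ys, xs].mean())

def boundary_radius(img, cx, cy, r_min=3, r_max=30):
    """Radius at which the circular integral changes most strongly,
    i.e. the best boundary candidate for the current center estimate."""
    means = np.array([circular_mean(img, cx, cy, r)
                      for r in range(r_min, r_max + 1)])
    jumps = np.abs(np.diff(means))   # discrete radial derivative
    return r_min + 1 + int(np.argmax(jumps))
```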
In [Daunys and Ramanauskas 2004] two methods to determine the center of a pupil are presented. First, the transition from the pupil to the iris is approximated by a polynomial to determine the pupil rim with sub-pixel accuracy. The coordinates averaging approach then forms horizontal and vertical scan lines between corresponding rim points. The pupil center is estimated by averaging the horizontal and vertical mid points for the respective component. For images without artifacts from e.g. glints this gives rather accurate results.
[Figure 2: Low resolution pupil image (infra red, with glint), enlarged to identify individual pixels.]

[Figure 3: Gradients (red arrows are shortened).]

[Figure 4: Relevant gradients chosen (see Sect. 3.1).]



For their circle approximation method the rim points are used to estimate a circle. Obvious outliers to the circle fitting are weighted down to find a good approximation after a few iterations.

The Starburst approach presented in [Li et al. 2005] first removes any glints and makes an initial rough guess for the pupil center. From here radial rays are sent out and observed for a significant jump in intensity. From these pupil rim points secondary rays are sent like a fan in the opposite direction in a range of ±50°, providing additional points on the rim. Using the RANSAC algorithm, an ellipse is fitted to the detected rim points.

[Pérez et al. 2003] describes a similar technique, starting from an initial pupil center guess, the center of gravity of pixels which are darker than a threshold. However, only primary rays are used for the pupil rim detection, which is done by employing a Laplace filter. If the detected points are not equidistant from the estimated center, iteratively a new center is chosen by using the mid points of the diagonals.
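The primary-ray step shared by these approaches can be sketched as follows. This is a minimal Python illustration assuming a grayscale numpy image and a dark-pupil setup; the ray count, maximum radius and intensity-jump threshold are made-up values.

```python
import numpy as np

def rim_points(img, cx, cy, n_rays=18, max_r=40, jump=25.0):
    """Walk outward along n_rays directions from the center guess and
    keep the first point per ray where the intensity rises sharply
    (dark pupil to brighter iris) as a candidate rim point."""
    points = []
    for angle in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dx, dy = np.cos(angle), np.sin(angle)
        prev = float(img[int(round(cy)), int(round(cx))])
        for r in range(1, max_r):
            x = int(round(cx + r * dx))
            y = int(round(cy + r * dy))
            if not (0 <= x < img.shape[1] and 0 <= y < img.shape[0]):
                break
            val = float(img[y, x])
            if val - prev > jump:   # significant jump in intensity
                points.append((x, y))
                break
            prev = val
    return points
```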

The algorithm described in [Ohno et al. 2002] is called double ellipse fitting. First, dark, round regions give an initial guess for pupils. Rim points are determined similar to Starburst's primary rays. An ellipse is fitted to these points and its center is used as starting point for a second run with a doubled number of rays, ignoring obvious outliers.

This is by far not a comprehensive list of published algorithms; several other approaches do exist, e.g. [Poursaberi and Araabi 2005], but they mostly follow similar principles.
3   Approaches

Using low-cost COTS (commercial off the shelf) devices limits the possibilities for the system setup. Common inexpensive cameras have a rather large field of view, resulting in very small projections of the eye on the image sensor. While this has the advantage of covering a larger range of head movements, which is desirable for some users, the disadvantage is the very small number of pixels forming the eye in the image, as shown in Figure 2.

Given such coarse input data it is obvious that working on the level of discrete pixels is not sufficient to achieve the accuracy required to control a gaze based interaction system. Only some of the published algorithms deal with sub-pixel accuracy and only very few put a specific focus on this aspect. For head mounted systems as well as for remote systems with a narrow field of view, sub-pixel accuracy is of limited importance, which explains this commonly neglected point. For a setup with COTS components, however, it is of great significance.

For remote infrared eye trackers like our system, pupil and glint centers are the key values for the gaze estimation. Their relative position, together with the geometry of the setup, provides the input for subsequent gaze estimation. While the glint is easily identified as the brightest area in the eye image, the pupil's outer form is not intact when the glint partly covers it. Additionally, the pupil border is not very sharp, causing further handicaps in its determination. To a certain degree this can be compensated by using the iris border as additional support, but it might be partly covered by the upper (and sometimes lower) eye lid(s) (see Figure 2).

For low resolution images the comparison presented in [Droege et al. 2008] shows that none of the tested algorithms (e.g. those in Section 2) performs significantly better than the calculation of the center of gravity of sufficiently dark pixels. This center of gravity, though, often gives a good first guess for subsequent steps as performed in the Starburst algorithm and others.
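A minimal sketch of this baseline, assuming a grayscale numpy image; the darkness threshold is an illustrative value:

```python
import numpy as np

def center_of_gravity(img, threshold=60):
    """Centroid of all sufficiently dark pixels; a cheap but useful
    first guess for the pupil center (threshold is illustrative)."""
    ys, xs = np.nonzero(img < threshold)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```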


3.1   Gradient Direction Consensus

Observation shows that in an image around the iris, a large number of gradients point near the pupil/iris center (Figure 3). Most of those not pointing towards the center are caused by noise and iris structure, mostly having a small strength. Other outliers are caused by the glint and denoted by a high gradient strength. By using a lower and a higher threshold for the gradient strength, all "reasonably" strong gradients are selected for further processing.

Using the Hessian form for straight lines we can describe any point p on a line by p · n = d, where n is the straight line's normal and d is its distance from the origin. The gradient components determined using the Sobel operator give the gradient direction as g = (g_x, g_y)^T and thus the normal to it as n_g = (−g_y, g_x)^T. Given any point on the straight line, which in this case is just the coordinate (x, y)^T where we determined the gradient, the distance d_g to the origin can be calculated as d_g = (x, y)^T · n_g = y · g_x − x · g_y. Thus, every point p = (x, y)^T on the gradient line solves p · n_g − d_g = ∆ with ∆ = 0; for other points ∆ gives the distance to the line.

The goal is to estimate a common center point m̂ minimizing

    m̂ · n_g − d_g = ∆                                            (1)

for all (relevant) gradients g. Expanding (1) yields

    ∆ = m̂ · n_g − d_g = (m_y − y) g_x + (x − m_x) g_y

and the partial derivatives compute as

    ∂∆/∂m_x = −g_y ,        ∂∆/∂m_y = g_x .

Using an M-estimator (described e.g. in [Staudte and Sheather 1990]), a common intersection point is determined.
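The consensus step amounts to a linear system with one constraint m̂ · n_g = d_g per selected pixel. The sketch below substitutes plain least squares for the paper's M-estimator (an iteratively reweighted variant would add robustness); the Sobel call and both strength thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import sobel

def gradient_consensus(img, t_low=20.0, t_high=200.0):
    """Estimate the pupil center as the least-squares intersection of
    the lines defined by each selected pixel and its gradient normal.
    Plain least squares stands in for the M-estimator of the paper;
    the strength thresholds t_low/t_high are illustrative."""
    gx = sobel(img.astype(float), axis=1)   # d/dx
    gy = sobel(img.astype(float), axis=0)   # d/dy
    strength = np.hypot(gx, gy)
    ys, xs = np.nonzero((strength > t_low) & (strength < t_high))
    # One linear constraint per selected pixel: m . n_g = d_g,
    # with n_g = (-gy, gx) and d_g = y*gx - x*gy.
    A = np.column_stack((-gy[ys, xs], gx[ys, xs]))
    b = ys * gx[ys, xs] - xs * gy[ys, xs]
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m[0], m[1]                        # (m_x, m_y)
```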
Figure 4 shows the positions in the image with gradient strengths within the abovementioned limits. Used positions are marked in dark blue, outliers in light blue. It is apparent in this sample image that the resulting center (red cross) is not 'pushed away' by the glint, as opposed to the reference point (green) calculated by the simple center of gravity approach.

3.2   Image Based Template Matching

Given the relatively small number of pixels to be analyzed, direct image comparison with an adaptable image template is possible. Such a template must reflect the situation of low resolution by taking into account the smooth transitions in intensities between the different regions. That is, the generation of the template image must be able to perform sub-pixel accurate anti-aliasing and blending in the drawing process.

Choosing appropriate (initial) pupil and iris radii and luminosity values for the eye regions, an initial template is generated. By iteratively shifting it to different positions around the initial guess, the position with the best match is taken for the next step with reduced search range and step width. This is repeated until a minimum step width, e.g. 0.1 pixel, is reached.

Of course, using appropriate parameters for the generation is essential. Therefore these parameters should be updated over time to account for changes in the appearance of the eye. Intensity values can be adapted by biasing the current values towards the actual values found after every successful match. The pupil radius can be adapted from the ellipse parameters determined by an analysis as described in Section 3.3.

The image comparison of two images f and g of size W × H is performed using common methods like the mean squared error or by employing a cross-correlation measure:

    MSE = 1/(W·H) · Σ_{x,y}^{W,H} (f_xy − g_xy)²                 (2)

    COR = 1/(W·H) · Σ_{x,y}^{W,H} (f_xy · g_xy)                  (3)
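A sketch of the coarse-to-fine search described above, using the MSE of Eq. (2) as the match criterion. It assumes the position denotes the template's top-left corner and uses scipy.ndimage.shift for the sub-pixel placement; the step widths and the 3×3 neighborhood are illustrative choices, and the template generation itself is omitted.

```python
import numpy as np
from scipy.ndimage import shift

def mse(f, g):
    """Mean squared error between two equally sized images, cf. Eq. (2)."""
    return float(np.mean((f - g) ** 2))

def refine_position(img, template, x0, y0, step=1.0, min_step=0.1):
    """Coarse-to-fine search for the template position: try the 3x3
    neighborhood at the current step width, keep the best MSE, halve
    the step, and stop below min_step (e.g. 0.1 pixel)."""
    best = (x0, y0)
    h, w = template.shape
    while step >= min_step:
        candidates = [(best[0] + dx * step, best[1] + dy * step)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        scores = []
        for cx, cy in candidates:
            xi, yi = int(np.floor(cx)), int(np.floor(cy))
            if xi < 0 or yi < 0 or yi + h > img.shape[0] or xi + w > img.shape[1]:
                scores.append(np.inf)
                continue
            window = img[yi:yi + h, xi:xi + w].astype(float)
            # emulate sub-pixel placement by shifting the template
            moved = shift(template.astype(float), (cy - yi, cx - xi), order=1)
            scores.append(mse(window, moved))
        best = candidates[int(np.argmin(scores))]
        step /= 2.0
    return best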
                                                                                 Section 3 as well as the center of gravity and the algorithm de-
Producing a template image with all relevant aspects would cause a multitude of additional parameters to be considered, e.g. glint position(s), glint radius, upper and lower lid positions and form. Since most of these parameters are hard to determine and of little further use, it is easier to mask out these regions from the comparison (Figure 5).

[Figure 5: Example template images. Black denotes masked out pixels. The glint position is masked out.]

    Algorithm                        Tag     RMSE
    center of gravity                CG11    0.1351
    [Pérez et al. 2003]              PE10    0.3047
    ellipse fit, 20° steps           EFS2    0.4016
    ellipse fit, 10° steps           EFS1    0.4022
    adaptive image pattern match     PM01    0.1059
    gradient direction consensus     GD02    0.0501

    Table 1: RMSE values for different algorithms.

3.3   Ellipse Fitting

For comparison, an ellipse fitting approach inspired by the algorithms presented in Section 2 has been implemented. First, pupil rim points are detected similar to [Pérez et al. 2003]. To eliminate the influence of erroneously detected points, again an M-estimator is employed.

Using 18 sample lines the result is not satisfactory. The estimator sometimes still includes glint border points in the set of input samples to optimize, resulting in an ellipse distorted by the glint.

Sending a ray every 20° seems rather coarse; raising the number of rays to 36 (10° steps), however, does not give a noticeable improvement (see Section 4).

[Figure 6: Ellipse fitting.]
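For illustration, an algebraic least-squares circle fit (the Kasa method) can stand in for the ellipse fit and returns a center estimate from the detected rim points; the implementation evaluated here fits an ellipse and adds M-estimator weighting, which this sketch omits.

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit (Kasa method) to rim points.
    Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense;
    the center is (-D/2, -E/2)."""
    p = np.asarray(points, dtype=float)
    x, y = p[:, 0], p[:, 1]
    A = np.column_stack((x, y, np.ones_like(x)))
    b = -(x ** 2 + y ** 2)
    (d, e, f), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -d / 2.0, -e / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - f)
    return (cx, cy), r
```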


4   Comparison

Providing reference data for eye tracking is difficult. Instructing a subject to follow a point with known coordinates on a computer screen bears two error sources when comparing pupil detection algorithms: errors introduced by the gaze estimation algorithms due to non-linearities cannot be distinguished from detection errors, and due to unintentional saccades the position of the pupil center does not necessarily correspond to the reference position on screen.

Given the low contrast in eye images gathered from a low resolution device, it is also difficult to assign a ground truth value to real images matching the desired accuracy. Hence, as a first means of evaluation, synthetic images have been generated with graphics algorithms capable of producing sub-pixel accurate anti-aliased output. While these images cannot be seen as a valid replacement for real images, it can at least be argued that bad performance on such almost ideal images will not become better on real images.

To distinguish between pure pupil center detection and the situation where glints put an additional strain on the algorithm, the first tests have been performed without glints. As some algorithms benefit from prior knowledge of the glint position (if any), an accurate glint position detection should be performed prior to the pupil center detection; it has been neglected in this work.

A series of 700 images with synthetic eye positions at regular sub-pixel positions was generated as test input. The algorithms from Section 3 as well as the center of gravity and the algorithm described in [Pérez et al. 2003] (which performed best in [Droege et al. 2008]) have been applied to this sequence. The Euclidean distance to the reference coordinate is measured; the resulting root mean square error (RMSE) is listed in Table 1. The deviation of the measured positions (red dots) from the reference (light blue dots) is shown in Figures 7-9.
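The error measure of Table 1 is the root mean square of the per-image Euclidean distances; a short sketch, assuming parallel arrays of estimated and reference centers:

```python
import numpy as np

def rmse(estimated, reference):
    """Root mean square of the Euclidean distances between estimated
    and reference pupil centers (as reported in Table 1)."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(reference, dtype=float)
    d2 = np.sum((est - ref) ** 2, axis=1)   # squared Euclidean distances
    return float(np.sqrt(d2.mean()))
```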
[Figure 7: Ellipse fit sub-sampled, 10° steps, synthetic input (RMSE: 0.4022).]

[Figure 8: Adaptive image pattern match, synthetic input (RMSE: 0.1059).]

[Figure 9: Gradient direction consensus, synthetic input (RMSE: 0.05005).]


Some algorithms show periodic patterns repeating at pixel frequency, as in Figure 7. This is easily explained by the effects of aliasing and thresholds, but is nevertheless undesirable.

The RMSE values from Table 1 seem to contradict the plots at first glance, but the variance for some of the other algorithms is much higher, resulting in a bad RMSE result. The most appealing result is found in Figure 9 for the gradient direction consensus. This is backed by the best RMSE value; however, it must be remembered that this is measured on 'clean' synthetic input images. Adding noise to the images reveals that the performance degrades as the noise in the images gets stronger. Additionally, no real iris structure is present in the synthetic images, which would introduce noisy regions even in the case of a good input signal.
[Figure 10: RMSE values dependent on noise strength (σ).]
Figure 10 shows the RMSE values for some of the algorithms as a function of the strength of the noise added to the synthetic images, for the left and right eye (marked 'L' and 'R'). The sigma of the Gaussian noise is varied from 0 to 7. The labels correspond to Table 1.

While the subjective impression of the performance on real input images looks good, no quantifiable results can be given up to now. Since the overall performance in a complete eye tracking system is always influenced by the gaze estimation process, such results are difficult to interpret and to attribute to the specific algorithm used.
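A sketch of the perturbation used for this experiment, assuming 8-bit grayscale input:

```python
import numpy as np

def add_gaussian_noise(img, sigma, rng=None):
    """Add zero-mean Gaussian noise of standard deviation sigma
    (varied from 0 to 7 for Figure 10) to an 8-bit grayscale image."""
    rng = rng or np.random.default_rng()
    noisy = img.astype(float) + rng.normal(0.0, sigma, size=img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```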
5   Conclusion

It is evident that a reasonable comparison must not be based on RMSE values of the dislocation distance alone. Additional measures like the variance must be used to characterize the performance. The approaches of adaptive pattern matching and gradient direction consensus seem to be good candidates for further investigation. Their robustness to noisy signals and distortions by glints is still to be investigated further; the subjective impression observed with real input images looks promising.

By improving the test framework and increasing the naturalness of the test image series, a more thorough investigation of new and existing algorithms will be possible and lead to improved results.

References

DAUGMAN, J. 2004. How iris recognition works. IEEE Transactions on Circuits and Systems for Video Technology 14, 1, 21–30.

DAUNYS, G., AND RAMANAUSKAS, N. 2004. The accuracy of eye tracking using image processing. In NordiCHI '04, ACM, New York, NY, USA, 377–380.

DROEGE, D., SCHMIDT, C., AND PAULUS, D. 2008. A comparison of pupil center estimation algorithms. In COGAIN 2008, H. Istance, O. Stepankova, and R. Bates, Eds. (short paper).

HANSEN, D. W., AND JI, Q. 2009. In the eye of the beholder: A survey of models for eyes and gaze. IEEE Transactions on Pattern Analysis and Machine Intelligence. (in print).

LI, D., WINFIELD, D., AND PARKHURST, D. J. 2005. Starburst: A hybrid algorithm for video-based eye tracking combining feature-based and model-based approaches. In CVPR '05, IEEE, Washington, 79–86.

MAJARANTA, P., AND RÄIHÄ, K.-J. 2002. Twenty years of eye typing. In Proceedings of Eye Tracking Research and Applications, ACM, 15–22.

OHNO, T., MUKAWA, N., AND YOSHIKAWA, A. 2002. FreeGaze: A gaze tracking system for everyday gaze interaction. In Proceedings of the Symposium on ETRA 2002: Eye Tracking Research and Applications Symposium, 125–132.

PÉREZ, A., CÓRDOBA, M., GARCÍA, A., MÉNDEZ, R., MUÑOZ, M., PEDRAZA, J., AND SÁNCHEZ, F. 2003. A precise eye-gaze detection and tracking system. In WSCG POSTERS Proceedings, February 3–7, 2003.

POURSABERI, A., AND ARAABI, B. 2005. A novel iris recognition system using morphological edge detector and wavelet phase features. ICGST International Journal on Graphics, Vision and Image Processing 05.

SAN AGUSTIN, J., SKOVSGAARD, H., HANSEN, J. P., AND HANSEN, D. W. 2009. Low-cost gaze interaction: ready to deliver the promises. In CHI EA '09: 27th Intl. Conf. on Human Factors in Computing Systems, ACM, New York, USA, 4453–4458.

STAUDTE, R. G., AND SHEATHER, S. J. 1990. Robust Estimation and Testing. Wiley, New York.

Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual SearchQvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual SearchKalle
 
Prats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement StudyPrats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement StudyKalle
 
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...Kalle
 
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...Kalle
 
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...Kalle
 
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...Kalle
 
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...Kalle
 
Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...
Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...
Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...Kalle
 
Mulligan Robust Optical Eye Detection During Head Movement
Mulligan Robust Optical Eye Detection During Head MovementMulligan Robust Optical Eye Detection During Head Movement
Mulligan Robust Optical Eye Detection During Head MovementKalle
 

Mehr von Kalle (20)

Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Blignaut Visual Span And Other Parameters For The Generation Of HeatmapsBlignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps
 
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
 
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
 
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
 
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
 
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze ControlUrbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
 
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
 
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic TrainingTien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
 
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser OphthalmoscopeStevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
 
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
 
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze TrackerSan Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
 
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual SearchQvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
 
Prats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement StudyPrats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement Study
 
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
 
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
 
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
 
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
 
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...
 
Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...
Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...
Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...
 
Mulligan Robust Optical Eye Detection During Head Movement
Mulligan Robust Optical Eye Detection During Head MovementMulligan Robust Optical Eye Detection During Head Movement
Mulligan Robust Optical Eye Detection During Head Movement
 

Droege Pupil Center Detection In Low Resolution Images

user interfaces.

Depending on the service capability of the health care system in different countries, affected persons might often not be funded to obtain a gaze tracking system. Therefore, several research groups work on systems using inexpensive off-the-shelf devices to build simple gaze tracking systems, e. g. [San Agustin et al. 2009]. Such systems perform at a significantly lower accuracy than the established systems, but still well enough to be used with gaze interaction systems. While the goal of using cheap (US$ 20-30) web cams could not yet be met, system costs of less than US$ 300 are currently realistic. Figure 1 shows an experimental setup built from such components.

Most of these algorithms were developed with specific setups and conditions in mind, as the usage scenarios for eye detection and tracking are quite different.

In the context of iris recognition, close-up high resolution images of the subject's eye are self-evident. Appropriate oculars, specific illumination and appropriate cameras deliver almost perfect images of the iris, however they accept a distraction of the eye by the extra illumination. Such images were the basis for Daugman's algorithm described in [Daugman 2004]. He defines an integrodifferential operator, based on a circular integral around the currently estimated pupil/iris center. While iteratively increasing the radius, large changes in the integral indicate the limbus and are used to find new center estimates.
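For reference, Daugman's operator takes the following well-known form (reproduced here from the literature, since the text above only paraphrases it); I(x, y) denotes the image, G_σ(r) a Gaussian smoothing kernel, and the contour integral runs along the circle of radius r around the candidate center (x_0, y_0):

    \max_{(r, x_0, y_0)} \left| \, G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r, x_0, y_0} \frac{I(x, y)}{2 \pi r} \, ds \, \right|

The maximizing triple (r, x_0, y_0) locates the circular iris boundary.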
In [Daunys and Ramanauskas 2004] two methods to determine the center of a pupil are presented. First, the transitions from the pupil to the iris are approximated by a polynomial to determine the pupil rim with sub-pixel accuracy. The coordinates averaging approach then forms horizontal and vertical scan lines between corresponding rim points. The pupil center is estimated by averaging the horizontal and vertical mid points for the respective component. For images without artifacts from e. g. glints this gives rather accurate results.
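To make the coordinates averaging idea concrete, here is a rough Python sketch (an illustration, not the authors' code) operating on a thresholded pupil region; pupil_thresh is a hypothetical gray value, and the sub-pixel polynomial rim fit of the original is omitted:

    import numpy as np

    def coordinates_averaging_center(img, pupil_thresh=60):
        """Average the midpoints of horizontal and vertical scan lines
        crossing the (thresholded) pupil region to estimate its center."""
        pupil = img < pupil_thresh
        mids_x, mids_y = [], []
        for y in range(img.shape[0]):              # horizontal scan lines
            xs = np.nonzero(pupil[y])[0]
            if xs.size:
                mids_x.append(0.5 * (xs[0] + xs[-1]))
        for x in range(img.shape[1]):              # vertical scan lines
            ys = np.nonzero(pupil[:, x])[0]
            if ys.size:
                mids_y.append(0.5 * (ys[0] + ys[-1]))
        return float(np.mean(mids_x)), float(np.mean(mids_y))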
Figure 2: Low resolution pupil image (infra red, with glint), enlarged to identify individual pixels.
Figure 3: Gradients (red arrows are shortened).
Figure 4: Relevant gradients chosen (see Sect. 3.1).

For their circle approximation method the rim points are used to estimate a circle. Obvious outliers to the circle fit are weighted down to find a good approximation after a few iterations.

The Starburst approach presented in [Li et al. 2005] first removes any glints and makes an initial rough guess for the pupil center. From here radial rays are sent and observed for a significant jump in intensity. From these pupil rim points secondary rays are sent, like a fan, in the opposite direction in a range of ±50°, providing additional points on the rim. Using the RANSAC algorithm, an ellipse is fitted to the detected rim points.

[Pérez et al. 2003] describe a similar technique, starting from an initial pupil center guess, the center of gravity of pixels which are darker than a threshold. However, only primary rays are used for the pupil rim detection, which is done by employing a Laplace filter. If the detected points are not equidistant from the estimated center, a new center is iteratively chosen using the mid points of the diagonals.

The algorithm described in [Ohno et al. 2002] is called double ellipse fitting. First, dark, round regions give an initial guess for pupils. Rim points are determined similar to Starburst's primary rays. An ellipse is fitted to these points and its center is used as the starting point for a second run with a doubled number of rays, ignoring obvious outliers.

This is by far not a comprehensive list of published algorithms; several other approaches exist, e. g. [Poursaberi and Araabi 2005], but they follow similar principles throughout.

3 Approaches

Using low-cost COTS (commercial off the shelf) devices limits the possibilities for the system setup. Common inexpensive cameras have a rather large field of view, resulting in very small projections of the eye on the image sensor. While this has the advantage of covering a larger range of head movements, which is desirable for some users, the disadvantage is the very small number of pixels forming the eye in the image, as shown in Figure 2.

Given such coarse input data it is obvious that working on the level of discrete pixels is not sufficient to achieve the accuracy required to control a gaze based interaction system. Only some of the published algorithms deal with sub-pixel accuracy, and only very few put a specific focus on this aspect. For head mounted systems as well as for remote systems with a narrow field of view, sub-pixel accuracy is of limited importance, which explains this commonly neglected point. For a setup with COTS components, however, it is of great significance.

For remote infrared eye trackers like our system, pupil and glint centers are the key values for the gaze estimation. Their relative position, together with the geometry of the setup, provides the input for subsequent gaze estimation. While the glint is easily identified as the brightest area in the eye image, the pupil's outer form is not intact when the glint partly covers it. Additionally, the pupil border is not very sharp, causing further handicaps in its determination. To a certain degree this can be compensated by using the iris border as additional support, but it might be partly covered by the upper (and sometimes lower) eye lid(s) (see Figure 2).

For low resolution images the comparison presented in [Droege et al. 2008] shows that none of the tested algorithms (e. g. those in Section 2) performs significantly better than the calculation of the center of gravity of sufficiently dark pixels. This center of gravity, though, often gives a good first guess for subsequent steps, as performed in the Starburst algorithm and others.

3.1 Gradient Direction Consensus

Observation shows that in an image around the iris, a large number of gradients point near the pupil/iris center (Figure 3). Most of those not pointing towards the center are caused by noise and iris structure, mostly having a small strength. Other outliers are caused by the glint and denoted by a high gradient strength. By using a lower and a higher threshold for the gradient strength, all "reasonably" strong gradients are selected for further processing.

Using the Hessian form for straight lines we can describe any point p on a line by p · n = d, where n is the line's normal and d is its distance from the origin. The gradient components determined using the Sobel operator give the gradient direction as g = (g_x, g_y)^T and thus the normal to it as n_g = (−g_y, g_x)^T. Given any point on the straight line, which in this case is just the coordinate (x, y)^T where we determined the gradient, the distance d_g to the origin can be calculated as d_g = (x, y)^T · n_g = y · g_x − x · g_y. Thus, every point p = (x, y)^T on the gradient line solves p · n_g − d_g = ∆ with ∆ = 0; for other points, ∆ gives the distance to the line.

The goal is to estimate a common center point m̂ minimizing

    ∆ = m̂ · n_g − d_g     (1)

for all (relevant) gradients g. Expanding (1) yields

    ∆ = m̂ · n_g − d_g = (m_y − y) g_x + (x − m_x) g_y

and the partial derivatives compute as

    ∂∆/∂m_x = −g_y,   ∂∆/∂m_y = g_x.
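The least-squares solution implied by these derivatives can be written down directly. Below is a minimal numpy sketch of the consensus step (an illustration, not the authors' implementation): it uses np.gradient in place of the Sobel operator, the threshold values lo and hi are hypothetical, and it performs a plain least-squares fit, whereas the paper applies a robust M-estimator on top, as described next:

    import numpy as np

    def gradient_direction_consensus(img, lo=20.0, hi=200.0):
        """Estimate the pupil/iris center as the least-squares common
        intersection of the lines running through each pixel along its
        gradient direction (Hessian normal form p . n_g = d_g)."""
        gy, gx = np.gradient(img.astype(float))      # per-pixel gradients
        mag = np.hypot(gx, gy)
        # keep only 'reasonably' strong gradients: weak ones are noise
        # and iris structure, very strong ones stem from the glint
        ys, xs = np.nonzero((mag > lo) & (mag < hi))
        nx, ny = -gy[ys, xs], gx[ys, xs]             # normal n_g = (-g_y, g_x)
        d = xs * nx + ys * ny                        # d_g = p . n_g
        A = np.stack([nx, ny], axis=1)
        m, *_ = np.linalg.lstsq(A, d, rcond=None)    # min ||A m - d||^2
        return m                                     # (m_x, m_y), sub-pixel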
Using an M-estimator (described e. g. in [Staudte and Sheather 1990]), a common intersection point is determined. Figure 4 shows the positions in the image with gradient strengths within the abovementioned limits. Used positions are marked in dark blue, outliers in light blue. It is apparent in this sample image that the resulting center (red cross) is not 'pushed away' by the glint, as opposed to the reference point (green) calculated by the simple center of gravity approach.

3.2 Image Based Template Matching

Given the relatively small number of pixels to be analyzed, direct image comparison with an adaptable image template is possible. Such a template must reflect the situation of low resolution by taking into account the smooth transitions in intensities between the different regions. That is, the generation of the template image must be able to perform sub-pixel accuracy, anti-aliasing and blending in the drawing process.

Choosing appropriate (initial) pupil and iris radii and luminosity values for the eye regions, an initial template is generated. By iteratively shifting it to different positions around the initial guess, the position with the best match is taken for the next step with reduced search range and step width. This is repeated until a minimum step width, e. g. 0.1 pixel, is reached.

Of course, using appropriate parameters for the generation is essential. Therefore these parameters should be updated over time to account for changes in the appearance of the eye. Intensity values can be adapted by biasing the current values towards the actual values found after every successful match. The pupil radius can be adapted from the ellipse parameters determined by an analysis as described in Section 3.3.

The image comparison of two images f and g of size W × H is performed using common methods like the mean squared error or a cross-correlation measure:

    MSE = 1/(W · H) · Σ_{x,y}^{W,H} (f_{xy} − g_{xy})²     (2)

    COR = 1/(W · H) · Σ_{x,y}^{W,H} (f_{xy} · g_{xy})     (3)

Producing a template image with all relevant aspects would cause a multitude of additional parameters to be considered, e. g. glint position(s), glint radius, upper and lower lid positions and form. Since most of these parameters are hard to determine and of little further use, it is easier to mask out these regions from the comparison (Figure 5).

Figure 5: Example template images. Black denotes masked out pixels; the glint position is masked out.
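A minimal sketch of this coarse-to-fine search (illustrative only, not the authors' code): render(cx, cy) stands for a hypothetical routine that draws the anti-aliased eye template for a sub-pixel center, and mask excludes the regions discussed above from the comparison:

    import numpy as np

    def masked_mse(f, g, mask):
        """Mean squared error (eq. 2), restricted to non-masked pixels."""
        d = (np.asarray(f, float) - np.asarray(g, float))[mask]
        return float(np.mean(d * d))

    def refine_center(img, render, mask, x0, y0, step=1.0, min_step=0.1):
        """Shift the rendered template around the current estimate, keep
        the best match, then halve search range and step width until the
        minimum step width is reached."""
        cx, cy = x0, y0
        while step >= min_step:
            cands = [(cx + i * step, cy + j * step)
                     for i in (-1, 0, 1) for j in (-1, 0, 1)]
            cx, cy = min(cands,
                         key=lambda c: masked_mse(img, render(*c), mask))
            step *= 0.5
        return cx, cy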
3.3 Ellipse Fitting

For comparison, an ellipse fitting approach inspired by the algorithms presented in Section 2 has been implemented. First, pupil rim points are detected similar to [Pérez et al. 2003]. To eliminate the influence of erroneously detected points, again an M-estimator is employed.

Using 18 sample lines the result is not satisfactory. The estimator sometimes still includes glint border points in the set of input samples to optimize, resulting in an ellipse distorted by the glint. Sending a ray every 20° seems rather coarse; raising the number of rays to 36 (10° steps), however, does not give a noticeable improvement (see Section 4).

Figure 6: Ellipse fitting.

4 Comparison

Providing reference data for eye tracking is difficult. Instructing a subject to follow a point with known coordinates on a computer screen bears two error sources when comparing pupil detection algorithms: errors introduced by the gaze estimation algorithms due to non-linearities cannot be distinguished from detection errors, and due to unintentional saccades the position of the pupil center does not necessarily correspond to the reference position on screen.

Given the low contrast in eye images gathered from a low resolution device, it is also difficult to assign a ground truth value to real images matching the desired accuracy. Hence, as a first means of evaluation, synthetic images have been generated with graphics algorithms capable of producing sub-pixel accurate anti-aliased output. While these images cannot be seen as a valid replacement for real images, it can at least be argued that bad performance on such almost ideal images will not become better on real images.

To distinguish between pure pupil center detection and the situation where glints put an additional strain on the algorithm, the first tests have been performed without glints. As some algorithms benefit from prior knowledge of the glint position (if any), an accurate glint position detection should be performed prior to the pupil center detection; it has been neglected in this work.

A series of 700 images with synthetic eye positions at regular sub-pixel positions was generated as test input. The algorithms from Section 3 as well as the center of gravity and the algorithm described in [Pérez et al. 2003] (which performed best in [Droege et al. 2008]) have been applied to this sequence. The Euclidean distance to the reference coordinate is measured; the resulting rooted mean square error (RMSE) is listed in Table 1.

    Algorithm                      Tag    RMSE
    center of gravity              CG11   0.1351
    [Pérez et al. 2003]            PE10   0.3047
    ellipse fit, 20° steps         EFS2   0.4016
    ellipse fit, 10° steps         EFS1   0.4022
    adaptive image pattern match   PM01   0.1059
    gradient direction consensus   GD02   0.0501

Table 1: RMSE values for different algorithms

The deviation of the measured positions (red dots) from the reference (light blue dots) is shown in Figures 7-9.
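For clarity, the error measure used here, sketched under the assumption that the detected and reference centers are given as (N, 2) arrays of sub-pixel coordinates:

    import numpy as np

    def rmse(detected, reference):
        """Rooted mean square error of the Euclidean distances between
        detected and reference pupil centers (as reported in Table 1)."""
        diff = np.asarray(detected, float) - np.asarray(reference, float)
        return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))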
Figure 7: Ellipse fit sub-sampled, 10° steps, synthetic input (RMSE 0.4022).
Figure 8: Adaptive image pattern match, synthetic input (RMSE 0.1059).
Figure 9: Gradient direction consensus, synthetic input (RMSE 0.05005).

Some algorithms show periodic patterns repeating at pixel frequency, as in Figure 7. This is easily explained by the effects of aliasing and thresholds, but is nevertheless undesirable.

The RMSE values from Table 1 seem to contradict the plots at first glance, but the variance for some of the other algorithms is much higher, resulting in a bad RMSE result. The most appealing result is found in Figure 9 for the gradient direction consensus. This is backed by the best RMSE value; however, it must be remembered that this is measured on 'clean' synthetic input images. Adding noise to the images reveals that the performance decreases the stronger the noise in the images is. Additionally, no real iris structure is present in the synthetic images, which would introduce noisy regions even in the case of a good input signal.

Figure 10: RMSE values dependent on noise strength (σ). [Plot: RMSE between 0 and 0.4 over σ from 0 to 7, curves GD02, EFB2 and PE10 for left and right eye.]

Figure 10 shows the RMSE values for some of the algorithms as a function of the strength of the noise added to the synthetic images, for left and right eye (marked 'L' and 'R'). The σ of the Gaussian noise is varied from 0 to 7. The labels correspond to Table 1.

While the subjective impression of the performance on real input images looks good, no quantifiable results can be given up to now. Since the overall performance in a complete eye tracking system is always influenced by the gaze estimation process, such results are difficult to interpret and to attribute to the specific algorithm used.

5 Conclusion

It is evident that a reasonable comparison must not be based on RMSE values of the dislocation distance alone. Additional measures like the variance must be used to characterize the performance. The approaches of adaptive pattern matching and gradient direction consensus seem to be good candidates for further investigation. Their robustness to noisy signals and distortions by glints is still to be investigated further; the subjective impressions observed with real input images look promising. By improving the test framework and increasing the naturalness of the test image series, a more thorough investigation of new and existing algorithms will be possible and lead to improved results.

References

DAUGMAN, J. 2004. How iris recognition works. IEEE Transactions on Circuits and Systems for Video Technology 14, 1, 21-30.

DAUNYS, G., AND RAMANAUSKAS, N. 2004. The accuracy of eye tracking using image processing. In NordiCHI '04, ACM, New York, NY, USA, 377-380.

DROEGE, D., SCHMIDT, C., AND PAULUS, D. 2008. A comparison of pupil center estimation algorithms. In COGAIN 2008, H. Istance, O. Stepankova, and R. Bates, Eds. (short paper).

HANSEN, D. W., AND JI, Q. 2009. In the eye of the beholder: A survey of models for eyes and gaze. IEEE Transactions on Pattern Analysis and Machine Intelligence. (in print).

LI, D., WINFIELD, D., AND PARKHURST, D. J. 2005. Starburst: A hybrid algorithm for video-based eye tracking combining feature-based and model-based approaches. In CVPR '05, IEEE, Washington, 79-86.

MAJARANTA, P., AND RÄIHÄ, K.-J. 2002. Twenty years of eye typing. In Proceedings of Eye Tracking Research and Applications, ACM, 15-22.

OHNO, T., MUKAWA, N., AND YOSHIKAWA, A. 2002. FreeGaze: A gaze tracking system for everyday gaze interaction. In Proceedings of the Symposium on ETRA 2002: Eye Tracking Research and Applications, 125-132.

PÉREZ, A., CÓRDOBA, M., GARCÍA, A., MÉNDEZ, R., MUÑOZ, M., PEDRAZA, J., AND SÁNCHEZ, F. 2003. A precise eye-gaze detection and tracking system. In WSCG POSTERS Proceedings, February 3-7, 2003.

POURSABERI, A., AND ARAABI, B. 2005. A novel iris recognition system using morphological edge detector and wavelet phase features. ICGST International Journal on Graphics, Vision and Image Processing 05.

SAN AGUSTIN, J., SKOVSGAARD, H., HANSEN, J. P., AND HANSEN, D. W. 2009. Low-cost gaze interaction: ready to deliver the promises. In CHI EA '09: 27th Intl. Conf. on Human Factors in Computing Systems, ACM, New York, USA, 4453-4458.

STAUDTE, R. G., AND SHEATHER, S. J. 1990. Robust Estimation and Testing. Wiley, New York.