Using eye tracking to investigate important cues for representative creature motion

Meredith McLendon (mgmiller@viz.tamu.edu), Ann McNamara (ann@viz.tamu.edu), Tim McLaughlin (timm@viz.tamu.edu), Ravindra Dwivedi (ravin@viz.tamu.edu)

Department of Visualization, Texas A&M University

Copyright © 2010 by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions Dept, ACM Inc., fax +1 (212) 869-0481 or e-mail permissions@acm.org. ETRA 2010, Austin, TX, March 22-24, 2010. © 2010 ACM 978-1-60558-994-7/10/0003 $10.00

Figure 1: We compared eye movements over video and point light display sequences. This image shows a single frame: from left to right, the full resolution video, the PLD representation, eye-tracked data over the full resolution frame, and eye-tracked data over the PLD. Color changes indicate the passage of time. Results show that gaze patterns are similar over PLD and full resolution video.


Keywords: animation, perception, eye tracking, point-light display

Abstract

We present an experiment designed to reveal some of the key features necessary for conveying creature motion. Humans can reliably identify animals shown in minimal form using Point Light Display (PLD) representations, but it is unclear what information they use when doing so. The ultimate goal for this research is to find recognizable traits that may be communicated to the viewer through motion, such as size and attitude, and then to use that information to develop a new way of creating and managing animation and animation controls. The aim of this study was to investigate whether viewers use similar visual information when asked to identify or describe animal motion in PLDs and full representations. Participants were shown 20 videos of 10 animals, first as PLDs and then in full resolution. After each video, participants were asked to select descriptive traits and to identify the animal represented. Species identification results were better than chance for six of the 10 animals when shown as PLDs. Results from the eye tracking show that participants' gaze was consistently drawn to similar regions when viewing the PLD as when viewing the full representation.

1 Introduction

Artists representing the motion of creatures through moving images understand the importance of the coordinated motion of elements, both relative to one another and relative to the environment. Talented artists can impart identifiable human and animal motion characteristics to even highly abstracted, non-biological forms. Winsor McCay's presentation of a hand-drawn diplodocus in the 1914 film Gertie the Dinosaur initiated the modern era of animated creatures. Building upon McCay's work, animators at Walt Disney Productions established the Principles of Animation, defining methods for communicating a character's form, movements, and spirit with each element equally important [Johnston and Thomas 1981]. With the advent of computer animation, though the tools had changed, the Principles of Animation were retained by artists interested in representing human and animal motion [Lasseter 1987].

Computer graphics (CG) offers a variety of methods for defining motion, including key-frame animation, data-driven action, and rule-based and physically-based motion. Of these, data-driven motion, in the form of Motion Capture (MoCap) of humans and animals, provides the most accurate representation of creature motion. However, MoCap technology is currently difficult to employ when the interest is animal motion, particularly wild animals. Creature animators often use video of animals in motion as visual reference for body posture and limb motion during locomotion. Even with expansive reference material, the animator's task of defining motion is compounded by the manner in which motion is defined in computer animation software.

In computer animation, the armature, or rig, provides the mechanism through which artists manipulate digital creatures. A rig is expressed as pivot locations in 3D Euclidean space around which joints may rotate with varying degrees of freedom. Together with linked joint segments of different lengths, rigs create the appearance of points in space moving in a coordinated manner. When the joint configuration and its motion are inspired by biological motion and rendered surfaces are driven by these transforming points, the appearance of a creature in motion is presented.

Part of the challenge animators face when composing creature motion is that the CG animation tool is fundamentally based upon principles of robotics engineering. The level of abstraction required to transform biological motion into computer animation is a significant impediment. For example, animators must approach the problem of creating a tiger's walk by defining the position of each foot relative to the body and the other feet through the cycle of support (foot on ground), passing (foot off ground and behind the opposite leg), high point (foot off ground and in front of the opposite leg), and again to support. The ankle and knee positions can be either explicitly defined by the animator using forward kinematics or solved by the computer using inverse kinematics.
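To make the FK/IK distinction concrete, the following Python sketch (ours, not from the paper; the joint names, angles, and segment lengths are illustrative) places a two-segment leg with forward kinematics. An inverse kinematics solver would run the other way, searching for the angles that put the ankle at a desired target.

```python
import math

def leg_forward_kinematics(hip, hip_angle, knee_angle, femur=0.50, shin=0.45):
    """Place the knee and ankle from joint angles (2D side view).

    Angles are in radians, measured from straight down; segment lengths
    are illustrative, not measured from any real animal.
    """
    knee = (hip[0] + femur * math.sin(hip_angle),
            hip[1] - femur * math.cos(hip_angle))
    ankle_angle = hip_angle + knee_angle  # the shin angle is relative to the femur
    ankle = (knee[0] + shin * math.sin(ankle_angle),
             knee[1] - shin * math.cos(ankle_angle))
    return knee, ankle

# FK: the animator chooses the angles directly. An IK solver would instead
# search for the hip_angle/knee_angle pair that lands the ankle on a target.
knee, ankle = leg_forward_kinematics(hip=(0.0, 1.0), hip_angle=0.3, knee_angle=-0.4)
print(knee, ankle)
```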

An animation rig that is biometrically accurate in structural behavior remains limited in its ability to facilitate believable creature locomotion because the rig is undiscriminating in regard to the relationship between how parts move and their importance to the perception of action. An experienced animator will likely recognize that, when creating a walking motion, defining foot placement relative to hip position is a priority over defining the relationship of the elbow to the shoulder. The animator's decision is based upon experience with not only observation of motion but viewer reaction to motion. To animate locomotion, an experienced animator will first define the relationship of the feet to the ground plane, then the feet to the hips. Only when those relationships are working well will the artist's attention turn to the movement of the upper torso. This intimates that there is a hierarchy in the value of moving parts as contributors to a character's motion.

To build an animation system for digital creatures that is informed by how we perceive motion, we must first understand how viewers interpret what they see and where they find this information within an image. Despite ever-increasing computing speed, a smart animation system must be computationally fast and intuitive to the artist. Therefore, before considering the engineering perspective of the problem, we must first determine what kinds of information can be communicated from the display of motion to the viewer from minimal visual data.

2 Previous Work

When approaching the problem of creating an animation system that is based upon the perception of biological motion, we are assisted by the fact that if only the joint pivot locations of a digital creature are rendered, the resulting moving images are visually equivalent to the presentation of PLDs. By attaching small objects to joint pivot locations and creating high visual contrast relative to other elements of a scene, Johansson [1973] developed a method for isolating motion from form as a collection of particles, now commonly known as PLDs. His methodology expanded early work on the coherence of motion patterns. Johansson's and subsequent studies demonstrated that PLDs of human actors in motion, though presenting significantly minimized visual information, carry all of the information necessary for the visual identification of human motion. Mather [1993] demonstrated that viewers are capable of identifying animal species in PLDs created from Muybridge's photographic motion studies [1979], thus extending the range of perception of biological motion beyond the human figure.

Collections of moving points, such as PLDs, may exhibit coordinated or non-coordinated motion. Coordinated motion includes rigid and non-rigid relationships between a point and its neighbors. Manipulating this minimal information can even affect the perceived gender of PLD walkers. For example, exaggerating the movement of points representing the hips and shoulders can bias gender recognition [Cutting et al. 1978]. If the collections of points are based upon human or animal form but the presentation is inverted or some elements are time-phase shifted, then recognition is impeded. This suggests that perception of biological motion is dependent upon both the relationship of the points in motion to one another and to the environment, i.e., the relative velocities of leading and trailing points in a perceived structure and the points' reaction to gravity [Shipley 2003].

As discussed earlier, an experienced animator recognizes the varying levels of importance of moving features on a digital creature. Visual perception and cognition studies correlate with this effect. By displacing portions of PLDs of walking humans, cats, and pigeons, Troje [2006] showed that the local motion of the feet contained the key information leading to recognition of direction of travel. Thurman [2008] used "bubble" masks to randomly obscure portions of the PLDs of walking figures and showed that large-scale relationships, such as configurations of points revealing body posture, are much less informative than localized actions such as the motion of the extremities (hands and feet).

Rather than using masking or displacement, we present an experiment that employs eye tracking to examine which regions of the video draw the viewer's gaze when viewing both standard video and PLDs of walking animals. We then compare this information to the viewer's success in identifying animal species and characteristics from the displays. Through this method, we aim to determine the minimal information necessary to convey certain characteristics and, conversely, the characteristics that are capable of being communicated through minimal spatio-temporal information.

Figure 2: This figure represents each of the stimuli used in the experiment. A still from the full resolution video is shown above, while the PLD is shown below.

3 Experiment

Our experimental design was inspired by Mather [1993]. He used PLDs which were rotoscoped using scanned photographs from Muybridge's study of animal locomotion. Like Mather, we used a list of animal species during the animal-identification task. Rather than generate PLDs from a sequence of photographs, we used animal motion video as the basis for our stimuli and designed a more rigorous method for dot placement, as further explained in Section 3.1.

The stimuli for the experiment included video of 10 animals that were shown in both full video and a PLD representation. The resulting 20 short video clips, ranging in length from 2 to 10 seconds, contained at least one full gait cycle (see Figure 2). Participants first completed a short training session consisting of a single trial with meaningless data to familiarize them with the experiment protocol. Each participant viewed the set of 10 PLD representations first, followed by the set of full videos. After viewing each video, participants were asked to select one characteristic from each pair shown in Table 1 that best described the video. They were then asked to identify the animal from the list of animals in Table 1. PLDs were presented first to eliminate any learning effects and bias that could have resulted from viewing full video first. A short break occurred between the sets of videos. Presentation order within each type was randomized to eliminate any learning effects. Each video clip measured 720 × 480 pixels. Video size was smaller than the viewing screen, and each clip was therefore centered on a black background.

Ten participants took part in the study. All had normal or corrected-to-normal vision and were naive as to the purpose of the experiment. Participants were seated 75 cm in front of a 22 inch LCD monitor in a well lit room.


Using a remote infrared camera-based eye-tracking system (faceLAB by Seeing Machines, Inc.), data pertaining to fixation position and saccades were recorded. A fixation is defined as any pause in gaze ≥ 150 ms. Participants were instructed to remain as still as possible during calibration. However, the equipment used to track the gaze is robust enough to quickly recalibrate a subject upon reentering the view.
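The paper specifies only the duration criterion for a fixation. As a hedged sketch of how such fixations could be extracted from raw gaze samples, the following dispersion-threshold detector uses the stated 150 ms minimum duration; the 30-pixel dispersion window and the sample format are our assumptions, not taken from the paper.

```python
def detect_fixations(samples, min_duration_ms=150, max_dispersion_px=30):
    """Group gaze samples, given as (t_ms, x, y) tuples, into fixations.

    The 150 ms minimum matches the paper's definition of a fixation;
    the dispersion window is an illustrative choice.
    """
    fixations = []
    start = 0
    while start < len(samples):
        end = start
        xs, ys = [samples[start][1]], [samples[start][2]]
        # Grow the window while the gaze points stay tightly clustered.
        while end + 1 < len(samples):
            x, y = samples[end + 1][1], samples[end + 1][2]
            spread = (max(xs + [x]) - min(xs + [x])) + (max(ys + [y]) - min(ys + [y]))
            if spread > max_dispersion_px:
                break
            xs.append(x)
            ys.append(y)
            end += 1
        duration = samples[end][0] - samples[start][0]
        if duration >= min_duration_ms:
            # Report the fixation as a centroid plus its duration.
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys), duration))
            start = end + 1
        else:
            start += 1
    return fixations
```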
3.1 Preparing Stimuli

Sequences representing the motion of animals were gathered from a variety of sources. The primary source was Absolutely Wild Visuals [AbsolutelyWildVisuals 2009], a company specializing in stock footage of wild animals. Both stationary and moving camera positions were included. The size of the animals and their PLD representations on screen varied from one-third to two-thirds of screen height. All motion, apart from one sequence, was orthogonal to the screen.

For each sequence, major joint pivot locations of the animal were identified using skeletal reference material from [Fehér 1996] and on-line sources. The selection of important joints was driven by [Cutting et al. 1978] and included marking the head, spine, shoulder, hip, knee, ankle, wrist, and toe.

When specifying animal motion from a single 2D reference, a key challenge is to preserve joint lengths despite changes in the spatial depth of the figure relative to the camera. Doing so successfully is crucial to the representation of rigid and non-rigid relationships between biological structures. To correctly handle this situation, we built a simple animation rig and relied on fixed joint lengths to preserve rigid connections (Figure 3). Rigid distances, such as from the hip to the knee and from the knee to the ankle, were held fixed, while articulating chains of joints permitted non-rigid relationships, such as the distance between the hip pivot and the shoulder pivot.

Figure 3: A basic rig was constructed to preserve rigid joint relationships in the PLDs generated for use as stimuli.
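The rig enforces this constraint inside Maya; as a standalone illustration of the idea, the sketch below (hypothetical coordinates and bone lengths) snaps a rotoscoped 2D joint position onto a circle of fixed bone length around its parent, so rigid distances such as hip-to-knee never stretch as the figure moves in depth.

```python
import math

def constrain_length(parent, target, bone_length):
    """Snap a rotoscoped child joint onto a fixed-length bone from its parent."""
    dx, dy = target[0] - parent[0], target[1] - parent[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (parent[0], parent[1] - bone_length)  # arbitrary fallback direction
    scale = bone_length / dist
    return (parent[0] + dx * scale, parent[1] + dy * scale)

# Pixel positions and lengths below are made up for illustration.
hip = (320.0, 240.0)
clicked_knee = (355.0, 310.0)  # position picked off the video frame
knee = constrain_length(hip, clicked_knee, bone_length=80.0)
ankle = constrain_length(knee, (370.0, 395.0), bone_length=75.0)
print(knee, ankle)
```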
A flatly rendered sphere, the joint locator, was placed at each joint pivot point. Autodesk's Maya 2008 [Autodesk 2009], an industry standard for 3D computer animation, was used to define the rigs and create the PLD animation. A 3D approach to point placement was preferred over 2D to allow us to account for depth and to render with 3D motion blur. Full video sequences were imported as an image plane and used for rotoscoping the motion. The rig was built on the image plane to maintain proportion with the footage. By working frame by frame, we determined the best size and fit for the rig over the entire video sequence.

For each frame, only those joint pivot points that were visible to the viewer in the original sequence were rendered in the PLDs. If a pivot point was occluded by the animal's body, such as a far leg passing behind a near leg, or occluded by objects in the environment such as grass, snow, or water, its locator was not displayed. Visibility was managed on a frame-by-frame basis.

PLD sequences were presented in high contrast as black dots on a white background, maintaining the natural outdoor familiarity of darker figures against a lighter environment. Joint locator size relative to screen size was modulated by scaling the sphere based upon the figure's distance from the camera. After the PLDs were completed for all 10 videos, we took the average dot size and scaled each figure to normalize dot size across all videos. Since the physical size of the animals in the source footage varied widely, this method maintained a consistency across the stimuli that was irrespective of the animal's size. We also animated the camera rendering the scene in order to stabilize the footage and remove any camera translation apparent in the original footage, allowing each clip to appear as if the animal were walking on a treadmill.

4 Results

4.1 Characteristics & Species Identification

    Heavy      Light         Cat         Dog
    Predator   Prey          Horse       Deer
    Old        Young         Ape         Giraffe
    Large      Small         Kangaroo    Rat
    Furry      Hairless      Fox         Elephant
    Tailed     Tailless      Zebra       Tiger
    Strong     Weak          Raccoon     Bear

Table 1: The list of characteristics (left) and animals (right) presented to participants after each video segment. Participants selected the characteristics which best described the stimuli just viewed.

In the full resolution video, as expected, participants correctly identified the animal 100% of the time. It is interesting to note that 25% of the time people could identify the animal correctly just from the PLD representation. However, some animals proved more recognizable in sparse form than others. The kangaroo, for example, was correctly identified by over half of the participants, whereas all 10 failed to recognize the bear (not side facing) or the elephant. While the orientation of the bear might explain this result, it is unclear why the elephant information does not convey the signature motion of that animal. It may be that the elephant shares enough structural similarities with other animals on the identification list that the motion patterns were dynamically similar. This effect can be seen in the results for the horse. Only three of the 10 participants correctly identified the horse, but if the other ungulate responses (deer, giraffe, and zebra) are included, the response improves to 70% correct.
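The paper does not state how "better than chance" was assessed. As one plausible check, assuming each response is a guess among the 14 animals on the list (chance ≈ 1/14) and taking the kangaroo's "over half" as 6 of 10 participants, a one-sided binomial test gives a p-value far below any conventional threshold.

```python
from scipy.stats import binomtest  # requires SciPy >= 1.7

# Hypothetical chance-level check: 14 animals on the response list gives a
# chance rate of about 1/14 per guess; "over half" of the 10 participants
# correct is taken here as 6 of 10. The test itself is our assumption,
# not the paper's stated method.
result = binomtest(k=6, n=10, p=1 / 14, alternative="greater")
print(result.pvalue)  # roughly 2e-5, far beyond the chance baseline
```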
In analyzing the trait responses, we treated the viewers' responses to the full videos as correct. Only traits with a consensus from the full view were analyzed. Some traits achieved a consensus across both video sets; for instance, the tiger was described as a strong predator in both the PLD and the full view. However, a consensus described the PLD polar bear as prey while also describing the full view polar bear as a predator.

4.2 Eye Tracking

Participants' eye movements were recorded as they performed the task in both the PLD and the full resolution video. In order to compare gaze patterns across the two stimuli, the videos were first segmented into individual frames. The centroid of each point in the PLD was chosen as a target region. Fixations were compared against each target region. The number of target regions varied across each animal, depending on whether the target region was visible in the full video. To reveal differences across both stimuli, fixations that occurred (only) in the target regions of both sets of videos were accumulated. The percentage fixation time within target regions for each sequence is shown in Table 2.
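As a sketch of how the per-sequence measure in Table 2 could be computed, assume detected fixations (e.g., from the detector in Section 3) carrying centroids and durations, and per-frame lists of PLD centroids as target regions. The 40-pixel target radius and the 30 fps frame rate are our assumptions; the paper does not give these values.

```python
def percent_time_on_targets(fixations, targets_by_frame, fps=30.0, radius_px=40.0):
    """Percentage of fixation time landing within any target region.

    fixations: list of (t_ms, x, y, duration_ms) tuples.
    targets_by_frame: dict mapping frame index -> list of (x, y) PLD centroids.
    """
    on_target = 0.0
    total = 0.0
    for t_ms, x, y, duration_ms in fixations:
        total += duration_ms
        frame = int(t_ms / 1000.0 * fps)  # frame shown when the fixation began
        for tx, ty in targets_by_frame.get(frame, []):
            if (x - tx) ** 2 + (y - ty) ** 2 <= radius_px ** 2:
                on_target += duration_ms
                break
    return 100.0 * on_target / total if total else 0.0
```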



As can be seen from the table, the time spent fixating in the PLD videos is higher than in the full view videos. This is to be expected due to the high-contrast, sparse nature of the PLD videos. The correlation between the time spent fixating in the 'dot regions' of the PLD and the time spent fixating in the same regions of the full video is rather high at 92%. This means that despite the difference in total time fixating on the target regions, a definite pattern emerges in both the PLD and full resolution videos. As can be seen in Figure 4, fixations are overlaid on the target regions (PLD centroids) for all frames of both video representations.

Figure 4: Gaze pattern across the entire Tiger sequence, PLD (left), full (right). Even though the time spent fixating on each region is different, the pattern of gaze over each image is not significantly different. This holds true for the image sequences.

    IMAGE       PLD    FULL      IMAGE        PLD    FULL
    Bear        61%    12%       Dog          75%    56%
    Elephant    40%    28%       Giraffe      94%    62%
    Ape         17%    28%       Horse        50%    24%
    KangaHop    21%    17%       KangaWalk    35%    37%
    Tiger       49%    49%       Zebra        20%    34%

Table 2: This table shows the percentage of fixations occurring within target regions. As expected, this number is higher for the sparse PLD representation. However, the corresponding areas in the full resolution images also show a higher number of fixations on target as opposed to non-target regions.

Gaze is drawn toward similar regions in the full resolution video as in the PLD representation. Figure 5 shows overall correlation values of the percentage of time fixating in target regions between each pair of videos. There is good agreement for most of the image sequences. A t-test performed on the percentage of time fixating on target regions showed no significant difference between video sequence and PLD pairs, t(9) = 1.8132, p > .05, giving further confidence of similarity between the gaze distributions over both sets of data.
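The reported statistic can be re-derived from Table 2 itself. A paired t-test over the ten PLD/full percentages (run here with SciPy, as one way to do it) reproduces t(9) = 1.8132 with a two-tailed p of about 0.10, consistent with no significant difference between the two presentation types.

```python
from scipy.stats import ttest_rel

# Percent fixation time on target per sequence, read from Table 2
# (Bear, Elephant, Ape, KangaHop, Tiger, Dog, Giraffe, Horse, KangaWalk, Zebra).
pld  = [61, 40, 17, 21, 49, 75, 94, 50, 35, 20]
full = [12, 28, 28, 17, 49, 56, 62, 24, 37, 34]

t, p = ttest_rel(pld, full)
print(round(t, 4), round(p, 3))  # 1.8132 0.103
```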
Figure 5: This graph charts the correlation between fixations on the PLD and the corresponding fixations on the same locations in the full resolution video for each animal. While percentage times are higher in the PLD videos, correlations between the two sets of videos reveal similarities in gaze patterns across presentation methods.

5 Conclusion & Future Work

To date, few studies have focused on the value of perception research as it applies to the generation of computer animation, particularly biological motion. While there is strong interest in the computer graphics community in creating the next generation of animation tools, there is no single consensus about how best to approach the subject. This project offers an initial approach that has the potential to satisfy the conditions of maximizing the visual result while minimizing the required input for a new system. Our investigation represents an initial step toward understanding what information is communicated by animal PLDs and how this minimal data can be transformed into useful information and applied to other uses. The results from this preliminary study are promising, and several follow-up experiments are imminent. In the future, we plan to include stimuli displaying a wider range of motions and to focus on the detection of traits across larger groups of animals as opposed to individual species.

References

ABSOLUTELY WILD VISUALS, 2009. http://www.absolutelywildvisuals.com.

AUTODESK, 2009. Maya.

CUTTING, J., PROFFITT, D., AND KOZLOWSKI, L. 1978. A biomechanical invariant for gait perception. J. Experimental Psychology: Human Perception and Performance 4, 3, 357-372.

FACELAB, 2009. faceLAB eye-tracking system. http://www.seeingmachine.com.

FEHÉR, G., AND SZUNYOGHY, A. 1996. Cyclopedia Anatomicae. Black Dog and Leventhal Publishers.

JOHANSSON, G. 1973. Visual perception of biological motion and a model for its analysis. Perception & Psychophysics 14, 2, 201-211.

JOHNSTON, O., AND THOMAS, F. 1981. The Illusion of Life: Disney Animation. Hyperion.

LASSETER, J. 1987. Principles of traditional animation applied to 3D computer animation. In SIGGRAPH '87: Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, ACM SIGGRAPH, 35-44.

MATHER, G. 1993. Recognition of animal locomotion from dynamic point light displays. Perception 22, 7, 759-766.

MUYBRIDGE, E. 1979. Muybridge's Complete Human and Animal Locomotion. Dover.

PYLES, J. A., GARCIA, J. O., HOFFMAN, D. D., AND GROSSMAN, E. D. 2007. Visual perception and neural correlates of novel 'biological motion'. Vision Research 47, 21 (Sept.), 2786-2797.

SHIPLEY, T. F. 2003. The effect of object and event orientation on perception of biological motion. Psychological Science 14, 377-380.

THURMAN, S. M., AND GROSSMAN, E. D. 2008. Temporal "bubbles" reveal key features for point-light biological motion perception. Journal of Vision 8, 3, 1-11.

TROJE, N. F., AND WESTHOFF, C. 2006. The inversion effect in biological motion perception: evidence for a life detector. Current Biology 16, 8 (Apr.), 821-824.





Weitere ähnliche Inhalte

Andere mochten auch

Skmbt 50112081315521 escrito conselleria
Skmbt 50112081315521 escrito conselleriaSkmbt 50112081315521 escrito conselleria
Skmbt 50112081315521 escrito conselleriaoscargaliza
 
akku laptops DE, Laptop akku DE, Notebook akku and laptop adapter, Billige el...
akku laptops DE, Laptop akku DE, Notebook akku and laptop adapter, Billige el...akku laptops DE, Laptop akku DE, Notebook akku and laptop adapter, Billige el...
akku laptops DE, Laptop akku DE, Notebook akku and laptop adapter, Billige el...apple1012
 
Ujian koko 2013
Ujian koko 2013Ujian koko 2013
Ujian koko 2013SMK BAKAI
 
Ikan dimalaysia
Ikan dimalaysiaIkan dimalaysia
Ikan dimalaysiaSMK BAKAI
 
Panfleto carrefour meridiano 2012 ii
Panfleto carrefour meridiano 2012   iiPanfleto carrefour meridiano 2012   ii
Panfleto carrefour meridiano 2012 iioscargaliza
 
Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Blignaut Visual Span And Other Parameters For The Generation Of HeatmapsBlignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Blignaut Visual Span And Other Parameters For The Generation Of HeatmapsKalle
 
Test management
Test managementTest management
Test managementOana Feidi
 
JudCon Brazil 2014 - Mobile push for all platforms
JudCon Brazil 2014 - Mobile push for all platformsJudCon Brazil 2014 - Mobile push for all platforms
JudCon Brazil 2014 - Mobile push for all platformsDaniel Passos
 
Medidas protec integral_violencia_xenero
Medidas protec integral_violencia_xeneroMedidas protec integral_violencia_xenero
Medidas protec integral_violencia_xenerooscargaliza
 

Andere mochten auch (20)

Client Presentation
Client PresentationClient Presentation
Client Presentation
 
Sentencia it
Sentencia itSentencia it
Sentencia it
 
Skmbt 50112081315521 escrito conselleria
Skmbt 50112081315521 escrito conselleriaSkmbt 50112081315521 escrito conselleria
Skmbt 50112081315521 escrito conselleria
 
akku laptops DE, Laptop akku DE, Notebook akku and laptop adapter, Billige el...
akku laptops DE, Laptop akku DE, Notebook akku and laptop adapter, Billige el...akku laptops DE, Laptop akku DE, Notebook akku and laptop adapter, Billige el...
akku laptops DE, Laptop akku DE, Notebook akku and laptop adapter, Billige el...
 
Ujian koko 2013
Ujian koko 2013Ujian koko 2013
Ujian koko 2013
 
Ikan dimalaysia
Ikan dimalaysiaIkan dimalaysia
Ikan dimalaysia
 
Digi Conv
Digi ConvDigi Conv
Digi Conv
 
Panfleto carrefour meridiano 2012 ii
Panfleto carrefour meridiano 2012   iiPanfleto carrefour meridiano 2012   ii
Panfleto carrefour meridiano 2012 ii
 
Hacker Space
Hacker SpaceHacker Space
Hacker Space
 
Brukeroppførsel
BrukeroppførselBrukeroppførsel
Brukeroppførsel
 
Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Blignaut Visual Span And Other Parameters For The Generation Of HeatmapsBlignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps
 
TEMA 4B Vocabulario
TEMA 4B VocabularioTEMA 4B Vocabulario
TEMA 4B Vocabulario
 
การเงิน การธนาคารและการคลัง
การเงิน การธนาคารและการคลังการเงิน การธนาคารและการคลัง
การเงิน การธนาคารและการคลัง
 
ศาสนาอิสลาม 402
ศาสนาอิสลาม 402ศาสนาอิสลาม 402
ศาสนาอิสลาม 402
 
Mission UID
Mission UIDMission UID
Mission UID
 
Test management
Test managementTest management
Test management
 
PBOPlus Introduction
PBOPlus IntroductionPBOPlus Introduction
PBOPlus Introduction
 
ความสัมพันธ์ทางเศรษฐกิจ
ความสัมพันธ์ทางเศรษฐกิจความสัมพันธ์ทางเศรษฐกิจ
ความสัมพันธ์ทางเศรษฐกิจ
 
JudCon Brazil 2014 - Mobile push for all platforms
JudCon Brazil 2014 - Mobile push for all platformsJudCon Brazil 2014 - Mobile push for all platforms
JudCon Brazil 2014 - Mobile push for all platforms
 
Medidas protec integral_violencia_xenero
Medidas protec integral_violencia_xeneroMedidas protec integral_violencia_xenero
Medidas protec integral_violencia_xenero
 

Ähnlich wie Mc Lendon Using Eye Tracking To Investigate Important Cues For Representative Creature Motion

Showcase2016_POSTER_MATH_Mar2016
Showcase2016_POSTER_MATH_Mar2016Showcase2016_POSTER_MATH_Mar2016
Showcase2016_POSTER_MATH_Mar2016Sally Tan
 
Jeremy p 5570_b_midterm
Jeremy p 5570_b_midtermJeremy p 5570_b_midterm
Jeremy p 5570_b_midtermArtSci_center
 
You.are the story - Derek Woodgate
You.are the story - Derek WoodgateYou.are the story - Derek Woodgate
You.are the story - Derek WoodgateKreativeAsia
 
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural TasksRyan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural TasksKalle
 
Assembling Movement Scientific Motion Analysis And Studio Animation Practice
Assembling Movement  Scientific Motion Analysis And Studio Animation PracticeAssembling Movement  Scientific Motion Analysis And Studio Animation Practice
Assembling Movement Scientific Motion Analysis And Studio Animation PracticeClaudia Acosta
 
Identification of humans using gait (synopsis)
Identification of humans using gait (synopsis)Identification of humans using gait (synopsis)
Identification of humans using gait (synopsis)Mumbai Academisc
 
Automatic Skinning of the Simulated Manipulator Robot ARM
Automatic Skinning of the Simulated Manipulator Robot ARM  Automatic Skinning of the Simulated Manipulator Robot ARM
Automatic Skinning of the Simulated Manipulator Robot ARM ijcga
 
Bio-Inspired Animated Characters A Mechanistic & Cognitive View
Bio-Inspired Animated Characters A Mechanistic & Cognitive ViewBio-Inspired Animated Characters A Mechanistic & Cognitive View
Bio-Inspired Animated Characters A Mechanistic & Cognitive ViewSimpson Count
 
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...Kalle
 
Mixed Reality: Pose Aware Object Replacement for Alternate Realities
Mixed Reality: Pose Aware Object Replacement for Alternate RealitiesMixed Reality: Pose Aware Object Replacement for Alternate Realities
Mixed Reality: Pose Aware Object Replacement for Alternate RealitiesAlejandro Franceschi
 
Hussain Learning Relevant Eye Movement Feature Spaces Across Users
Hussain Learning Relevant Eye Movement Feature Spaces Across UsersHussain Learning Relevant Eye Movement Feature Spaces Across Users
Hussain Learning Relevant Eye Movement Feature Spaces Across UsersKalle
 
Actors And Acting In Motion Capture
Actors And Acting In Motion CaptureActors And Acting In Motion Capture
Actors And Acting In Motion CapturePedro Craggett
 
HUMAN ACTION RECOGNITION IN VIDEOS USING STABLE FEATURES
HUMAN ACTION RECOGNITION IN VIDEOS USING STABLE FEATURES HUMAN ACTION RECOGNITION IN VIDEOS USING STABLE FEATURES
HUMAN ACTION RECOGNITION IN VIDEOS USING STABLE FEATURES sipij
 
Love, j. pasquier, p. wyvill, b.tzanetakis, g. (2011): aesthetic agents swarm...
Love, j. pasquier, p. wyvill, b.tzanetakis, g. (2011): aesthetic agents swarm...Love, j. pasquier, p. wyvill, b.tzanetakis, g. (2011): aesthetic agents swarm...
Love, j. pasquier, p. wyvill, b.tzanetakis, g. (2011): aesthetic agents swarm...ArchiLab 7
 
Notes Towards Affect Engines
Notes Towards Affect EnginesNotes Towards Affect Engines
Notes Towards Affect Enginesvogmae
 
ACM ICMI Workshop 2012
ACM ICMI Workshop 2012ACM ICMI Workshop 2012
ACM ICMI Workshop 2012Lê Anh
 

Ähnlich wie Mc Lendon Using Eye Tracking To Investigate Important Cues For Representative Creature Motion (20)

Showcase2016_POSTER_MATH_Mar2016
Showcase2016_POSTER_MATH_Mar2016Showcase2016_POSTER_MATH_Mar2016
Showcase2016_POSTER_MATH_Mar2016
 
Jeremy p 5570_b_midterm
Jeremy p 5570_b_midtermJeremy p 5570_b_midterm
Jeremy p 5570_b_midterm
 
VR- virtual reality
VR- virtual realityVR- virtual reality
VR- virtual reality
 
You.are the story - Derek Woodgate
You.are the story - Derek WoodgateYou.are the story - Derek Woodgate
You.are the story - Derek Woodgate
 
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural TasksRyan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
 
printing
printingprinting
printing
 
Assembling Movement Scientific Motion Analysis And Studio Animation Practice
Assembling Movement  Scientific Motion Analysis And Studio Animation PracticeAssembling Movement  Scientific Motion Analysis And Studio Animation Practice
Assembling Movement Scientific Motion Analysis And Studio Animation Practice
 
Multimedia chapter 5
Multimedia chapter 5Multimedia chapter 5
Multimedia chapter 5
 
Identification of humans using gait (synopsis)
Identification of humans using gait (synopsis)Identification of humans using gait (synopsis)
Identification of humans using gait (synopsis)
 
Automatic Skinning of the Simulated Manipulator Robot ARM
Automatic Skinning of the Simulated Manipulator Robot ARM  Automatic Skinning of the Simulated Manipulator Robot ARM
Automatic Skinning of the Simulated Manipulator Robot ARM
 
Bio-Inspired Animated Characters A Mechanistic & Cognitive View
Bio-Inspired Animated Characters A Mechanistic & Cognitive ViewBio-Inspired Animated Characters A Mechanistic & Cognitive View
Bio-Inspired Animated Characters A Mechanistic & Cognitive View
 
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
 
Mixed Reality: Pose Aware Object Replacement for Alternate Realities
Mixed Reality: Pose Aware Object Replacement for Alternate RealitiesMixed Reality: Pose Aware Object Replacement for Alternate Realities
Mixed Reality: Pose Aware Object Replacement for Alternate Realities
 
Hussain Learning Relevant Eye Movement Feature Spaces Across Users
Hussain Learning Relevant Eye Movement Feature Spaces Across UsersHussain Learning Relevant Eye Movement Feature Spaces Across Users
Hussain Learning Relevant Eye Movement Feature Spaces Across Users
 
Actors And Acting In Motion Capture
Actors And Acting In Motion CaptureActors And Acting In Motion Capture
Actors And Acting In Motion Capture
 
HUMAN ACTION RECOGNITION IN VIDEOS USING STABLE FEATURES
HUMAN ACTION RECOGNITION IN VIDEOS USING STABLE FEATURES HUMAN ACTION RECOGNITION IN VIDEOS USING STABLE FEATURES
HUMAN ACTION RECOGNITION IN VIDEOS USING STABLE FEATURES
 
Love, j. pasquier, p. wyvill, b.tzanetakis, g. (2011): aesthetic agents swarm...
Love, j. pasquier, p. wyvill, b.tzanetakis, g. (2011): aesthetic agents swarm...Love, j. pasquier, p. wyvill, b.tzanetakis, g. (2011): aesthetic agents swarm...
Love, j. pasquier, p. wyvill, b.tzanetakis, g. (2011): aesthetic agents swarm...
 
Siamoc 01102001 2
Siamoc 01102001 2Siamoc 01102001 2
Siamoc 01102001 2
 
Notes Towards Affect Engines
Notes Towards Affect EnginesNotes Towards Affect Engines
Notes Towards Affect Engines
 
ACM ICMI Workshop 2012
ACM ICMI Workshop 2012ACM ICMI Workshop 2012
ACM ICMI Workshop 2012
 

Mehr von Kalle

Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...Kalle
 
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...Kalle
 
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...Kalle
 
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...Kalle
 
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze ControlUrbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze ControlKalle
 
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...Kalle
 
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic TrainingTien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic TrainingKalle
 
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...Kalle
 
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser OphthalmoscopeStevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser OphthalmoscopeKalle
 
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...Kalle
 
Skovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze AloneSkovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze AloneKalle
 
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze TrackerSan Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze TrackerKalle
 
Rosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem SolvingRosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem SolvingKalle
 
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual SearchQvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual SearchKalle
 
Prats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement StudyPrats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement StudyKalle
 
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...Kalle
 
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...Kalle
 
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...Kalle
 
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...Kalle
 
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...Kalle
 

Mehr von Kalle (20)

Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
 
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
 
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
 
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
 
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze ControlUrbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
 
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
 
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic TrainingTien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
 
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
 
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser OphthalmoscopeStevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
 
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
 
Skovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze AloneSkovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze Alone
 
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze TrackerSan Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
 
Rosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem SolvingRosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem Solving
 
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual SearchQvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
 
Prats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement StudyPrats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement Study
 
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
 
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
 
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
 
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
 
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...
 

Mc Lendon Using Eye Tracking To Investigate Important Cues For Representative Creature Motion

  • 1. Using eye tracking to investigate important cues for representative creature motion Meredith McLendon ∗ Ann McNamara† Tim McLaughlin‡ Ravindra Dwivedi§ Department of Visualization - Texas A&M University Figure 1: We compared eye movements over video and point light display sequences. This image shows a single frame, from left to right, full resolution, PLD representation, eyetracked data over full and eyetracked data over PLD. Color changes indicate the passage of time. Results show that gaze patterns are similar over PLD and full resolution video. Keywords: animation, perception, eyetracking, point-light display Gertie the Dinosaur initiated the modern era of animated creatures. Building upon McKay’s work, animators at Walt Disney Produc- tions established the Principles of Animation, defining methods for Abstract communicating a character’s form, movements, and spirit with each element equally important [Johnston and Thomas 1981]. With the We present an experiment designed to reveal some of the key fea- advent of computer animation, though the tools had changed, the tures necessary for conveying creature motion. Humans can reliably Principles of Animation were retained by artists interested in repre- identify animals shown in minimal form using Point Light Display senting human and animal motion [Lasseter 1987]. (PLD) representations, but it is unclear what information they use when doing so. The ultimate goal for this research is to find rec- Computer graphics (CG) offers a variety of methods for defining ognizable traits that may be communicated to the viewer through motion including key-frame animation, data-driven action, rule- motion, such as size and attitude and then to use that information based and physically-based motion. Of these, data-driven, in the to develop a new way of creating and managing animation and an- form of the Motion Capture (MoCap) of humans and animals, pro- imation controls. The aim of this study was to investigate whether vides the most accurate representation of creature motion. How- viewers use similar visual information when asked to identify or de- ever, MoCap technology is currently difficult to employ when the scribe animal motion PLDs and full representations. Participants interest is animal motion, particularly wild animals. Creature ani- were shown 20 videos of 10 animals, first as PLD and then in full mators often use video of animals in motion as visual reference for resolution. After each video, participants were asked to select de- body posture and limb motion during locomotion. Even with ex- scriptive traits and to identify the animal represented. Species iden- pansive reference material, the animator’s task of defining motion is tification results were better than chance for six of the 10 animals compounded by the manner in which motion is defined in computer when shown PLD. Results from the eye tracking show that partici- animation software. pants’ gaze was consistently drawn to similar regions when viewing the PLD as the full representation. In computer animation, the armature, or rig, provides the mecha- nism through which artists manipulate digital creatures. A rig is expressed as pivot locations in 3D-Euclidean space around which 1 Introduction joints may rotate with varying degrees of freedom. Together with linked joint segments of different lengths, rigs create the appear- Artists representing the motion of creatures through moving images ance of points in space moving in a coordinated manner. 
An animation rig that is biometrically accurate in structural behavior remains limited in its ability to facilitate believable creature locomotion, because the rig is undiscriminating in regards to the relationship between how parts move and their importance to the perception of action. An experienced animator will likely recognize that, when creating a walking motion, defining foot placement relative to hip position is a priority over defining the relationship of the elbow to the shoulder. The animator's decision is based upon experience with not only observation of motion but viewer reaction to motion. To animate locomotion, an experienced animator will first define the relationship of the feet to the ground plane, then the feet to the hips. Only when those relationships are working well will the artist's attention turn to the movement of the upper torso. This suggests that there is a hierarchy in the value of moving parts as contributors to a character's motion.

To build an animation system for digital creatures that is informed by how we perceive motion, we must first understand how viewers interpret what they see and where they find this information within an image. Despite ever-increasing computing speed, a smart animation system must be computationally fast and intuitive to the artist. Therefore, before considering the engineering perspective of the problem, we must first determine what kinds of information can be communicated to the viewer from minimal visual data in a display of motion.
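The hierarchy described above is not formalized in the paper; purely as an illustration, it could be encoded as an ordered list of motion relationships that a perception-informed animation tool might expose in the order an animator would address them.

```python
# Illustrative only: an ordered encoding of the animator's workflow described
# above, from highest to lowest priority when animating locomotion.
LOCOMOTION_PRIORITIES = [
    ("feet",  "ground plane"),   # establish contact and timing first
    ("feet",  "hips"),           # then the support relationship
    ("torso", "hips"),           # upper-body motion once the gait works
    ("elbow", "shoulder"),       # fine detail comes last
]

for part, reference in LOCOMOTION_PRIORITIES:
    print(f"refine motion of {part} relative to {reference}")
```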
2 Previous Work

When approaching the problem of creating an animation system based upon the perception of biological motion, we are assisted by the fact that if only the joint pivot locations of a digital creature are rendered, the resulting moving images are visually equivalent to the presentation of PLDs. By attaching small objects to joint pivot locations and creating high visual contrast relative to other elements of a scene, Johansson [1973] developed a method for isolating motion from form as a collection of particles, now commonly known as PLDs. His methodology expanded early work on the coherence of motion patterns. Johansson's and subsequent studies demonstrated that PLDs of human actors in motion, though presenting significantly minimized visual information, carry all of the information necessary for the visual identification of human motion. Mather [1993] demonstrated that viewers are capable of identifying animal species in PLDs created from Muybridge's photographic motion studies [Muybridge 1979], thus extending the range of perception of biological motion beyond the human figure.

Collections of moving points, such as PLDs, may exhibit coordinated or non-coordinated motion. Coordinated motion includes rigid and non-rigid relationships between a point and its neighbors. Manipulating this minimal information can even affect the perceived gender of PLD walkers. For example, exaggerating the movement of points representing the hips and shoulders can bias gender recognition [Cutting et al. 1978]. If the collections of points are based upon human or animal form but the presentation is inverted or some elements are time-phase shifted, then recognition is impeded. This suggests that perception of biological motion is dependent upon both the relationship of the points in motion to one another and their relationship to the environment, i.e., the relative velocities of leading and trailing points in a perceived structure and the points' reaction to gravity [Shipley 2003].
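As a concrete illustration of the two manipulations just described, the hedged sketch below inverts a PLD sequence and time-phase shifts a single point's trajectory. The `points` array and its (frames, joints, 2) layout are assumptions made for the example, not the stimulus format used in any of the cited studies.

```python
import numpy as np

def invert_pld(points, frame_height):
    """Flip a point-light sequence vertically (the inversion manipulation)."""
    flipped = points.copy()
    flipped[..., 1] = frame_height - flipped[..., 1]
    return flipped

def phase_shift_joint(points, joint, shift_frames):
    """Time-shift one joint's trajectory relative to the others."""
    shifted = points.copy()
    shifted[:, joint, :] = np.roll(points[:, joint, :], shift_frames, axis=0)
    return shifted

# Example with random stand-in data: 120 frames, 9 joints, (x, y) coordinates.
points = np.random.rand(120, 9, 2) * [720, 480]
inverted = invert_pld(points, frame_height=480)
scrambled = phase_shift_joint(points, joint=3, shift_frames=10)
```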
As discussed earlier, an experienced animator recognizes the varying levels of importance of moving features on a digital creature. Visual perception and cognition studies correlate with this effect. By displacing portions of PLDs of walking humans, cats, and pigeons, Troje [2006] showed that the local motion of the feet contains the key information leading to recognition of direction of travel. Thurman [2008] used "bubble" masks to randomly obscure portions of the PLDs of walking figures and showed that large-scale relationships, such as configurations of points revealing body posture, are much less informative than localized actions such as the motion of the extremities (hands and feet).

Rather than using masking or displacement, we present an experiment that employs eye tracking to examine which regions of the video draw the viewer's gaze when viewing both standard video and PLDs of walking animals. We then compare this information to the viewer's success in identifying animal species and characteristics from the displays. Through this method, we aim to determine the minimal information necessary to convey certain characteristics and, conversely, the characteristics that are capable of being communicated through minimal spatio-temporal information.

Figure 2: This figure represents each of the stimuli used in the experiment. A still from the full resolution video is shown above while the PLD is shown below.

3 Experiment

Our experimental design was inspired by Mather [1993], who used PLDs rotoscoped from scanned photographs from Muybridge's study of animal locomotion. Like Mather, we used a list of animal species during the animal-identification task. Rather than generate PLDs from a sequence of photographs, we used animal motion video as the basis for our stimuli and designed a more rigorous method for dot placement, as further explained in Section 3.1.

The stimuli for the experiment included video of 10 animals that were shown in both full video and a PLD representation. The resulting 20 short video clips, ranging in length from 2 to 10 seconds, contained at least one full gait cycle (see Figure 2). Participants first completed a short training session consisting of a single trial with meaningless data to familiarize them with the experiment protocol. Each participant viewed the set of 10 PLD representations first, followed by the set of full videos. After viewing each video, participants were asked to select one characteristic from each pair shown in Table 1 that best described the video. They were then asked to identify the animal from the list of animals in Table 1. PLDs were presented first to eliminate any learning effects and bias that could have resulted from viewing full video first. A short break occurred between the sets of videos. Presentation order within each type was randomized to eliminate any learning effects. Each video clip measured 720 × 480 pixels. Video size was smaller than the viewing screen, so each clip was centered on a black background.

Ten participants took part in the study. All had normal or corrected-to-normal vision and were naive as to the purpose of the experiment. Participants were seated 75 cm in front of a 22 inch LCD monitor in a well-lit room. Using a remote infrared camera-based eye-tracking system¹, data pertaining to fixation position and saccades were recorded. A fixation is defined as any pause in gaze ≥ 150 ms. Participants were instructed to remain as still as possible during calibration; however, the equipment used to track the gaze is robust enough to quickly recalibrate a subject on reentering the view.

¹faceLAB® by Seeing Machines, Inc.
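The fixation criterion above (any pause in gaze ≥ 150 ms) can be approximated offline by a simple dispersion-based grouping of gaze samples. The sketch below is our own reconstruction rather than the faceLAB software's algorithm, and the sampling rate and dispersion threshold are assumed values.

```python
def detect_fixations(gaze, sample_rate_hz=60, min_duration_ms=150,
                     max_dispersion_px=30):
    """Group consecutive gaze samples (x, y) into fixations.

    A fixation is a run of samples lasting >= min_duration_ms whose
    bounding-box dispersion stays within max_dispersion_px.
    """
    min_samples = int(min_duration_ms / 1000 * sample_rate_hz)
    fixations, start = [], 0
    while start + min_samples <= len(gaze):
        xs, ys = zip(*gaze[start:start + min_samples])
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion_px:
            # Grow the window while the dispersion criterion still holds.
            end = start + min_samples
            while end < len(gaze):
                xs, ys = zip(*gaze[start:end + 1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion_px:
                    break
                end += 1
            n = end - start
            centroid = (sum(x for x, _ in gaze[start:end]) / n,
                        sum(y for _, y in gaze[start:end]) / n)
            fixations.append((start, end, centroid))
            start = end
        else:
            start += 1
    return fixations
```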
3.1 Preparing Stimuli

Sequences representing the motion of animals were gathered from a variety of sources. The primary source was Absolutely Wild Visuals [AbsolutelyWildVisuals 2009], a company specializing in stock footage of wild animals. Both stationary and moving camera positions were included. The size of the animals and their PLD representations on screen varied from one-third to two-thirds of the screen height. All motion, apart from one sequence, was orthogonal to the screen.

For each sequence, major joint pivot locations of the animal were identified using skeletal reference material from [Fehér and Szunyoghy 1996] and on-line sources. The selection of important joints was driven by [Cutting et al. 1978] and included marking the head, spine, shoulder, hip, knee, ankle, wrist, and toe.

When specifying animal motion from a single 2D reference, a key challenge is to preserve joint lengths despite changes in the spatial depth of the figure relative to the camera. Doing so successfully is crucial to the representation of rigid and non-rigid relationships between biological structures. To correctly handle this situation, we built a simple animation rig and relied on fixed joint lengths to preserve rigid connections (Figure 3). Rigid distances, such as from the hip to the knee and from the knee to the ankle, were preserved, while articulating chains of joints permitted non-rigid relationships, such as the distance between the hip pivot and the shoulder pivot.

Figure 3: A basic rig was constructed to preserve rigid joint relationships in the PLDs generated for use as stimuli.
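The fixed-joint-length constraint can be pictured with a small sketch: given a parent pivot and a rotoscoped guess for the child pivot, the child is re-projected to lie at exactly the bone's length from the parent while keeping the traced direction. This illustrates the constraint in 2D only; it is not the Maya rig itself, and all coordinates are made up for the example.

```python
import math

def constrain_length(parent, child_guess, bone_length):
    """Snap a rotoscoped child pivot to a fixed distance from its parent,
    preserving the direction indicated by the video frame."""
    dx, dy = child_guess[0] - parent[0], child_guess[1] - parent[1]
    dist = math.hypot(dx, dy) or 1.0   # avoid dividing by zero
    scale = bone_length / dist
    return (parent[0] + dx * scale, parent[1] + dy * scale)

# A knee traced slightly too far from the hip is pulled back to bone length.
hip, traced_knee = (100.0, 200.0), (130.0, 260.0)
knee = constrain_length(hip, traced_knee, bone_length=60.0)
```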
A flatly rendered sphere, the joint locator, was placed at each joint pivot point. Autodesk's Maya 2008 [Autodesk 2009], an industry standard for 3D computer animation, was used to define the rigs and create the PLD animation. A 3D approach to point placement was preferred over 2D to allow us to account for depth and to render with 3D motion blur. Full video sequences were imported as an image plane and used for rotoscoping the motion. The rig was built on the image plane to maintain proportion with the footage. By working frame by frame, we determined the best size and fit for the rig over the entire video sequence.

For each frame, only those joint pivot points that were visible to the viewer in the original sequence were rendered in the PLDs. If a pivot point was occluded by the animal's body, such as a far leg passing behind a near leg, or occluded by objects in the environment such as grass, snow, or water, the pivot locator was not displayed. Visibility was managed on a frame-by-frame basis.

PLD sequences were presented in high contrast as black dots on a white background, maintaining the natural outdoor familiarity of darker figures against a lighter environment. Joint locator size relative to screen size was modulated by scaling each sphere based upon the figure's distance from the camera. After the PLDs were completed for all 10 videos, we took the average dot size and scaled each figure to normalize dot size across all videos. Since the physical size of the animals in the source footage varied widely, this method maintained a consistency across the stimuli that was irrespective of the animal's size. We also animated the camera rendering the scene in order to stabilize the footage and remove any camera translation apparent in the original footage, allowing each clip to appear as if the animal were walking on a treadmill.

4 Results

4.1 Characteristics & Species Identification

    Heavy    / Light        Cat        Dog
    Predator / Prey         Horse      Deer
    Old      / Young        Ape        Giraffe
    Large    / Small        Kangaroo   Rat
    Furry    / Hairless     Fox        Elephant
    Tailed   / Tailless     Zebra      Tiger
    Strong   / Weak         Raccoon    Bear

Table 1: The list of characteristics (left) and animals (right) presented to participants after each video segment. Participants selected the characteristics which best described the stimulus just viewed.

In the full resolution video, as expected, participants correctly identified the animal 100% of the time. It is interesting to note that 25% of the time participants could identify the animal correctly from the PLD representation alone. However, some animals proved more recognizable in sparse form than others. The kangaroo, for example, was correctly identified by over half of the participants, whereas all 10 failed to recognize the bear (not side facing) or the elephant. While the orientation of the bear might explain this result, it is unclear why the elephant PLD does not convey the signature motion of that animal. It may be that the elephant shares enough structural similarities with other animals on the identification list that the motion patterns were dynamically similar. This effect can be seen in the results for the horse. Only three of the 10 participants correctly identified the horse, but if the other ungulate responses (deer, giraffe, and zebra) are included, the response improves to 70% correct.

In analyzing the trait responses, we treated the viewers' responses to the full videos as correct. Only traits with a consensus from the full view were analyzed. Some traits achieved a consensus across both video sets; for instance, the tiger was described as a strong predator in both the PLD and the full view. However, a consensus described the PLD polar bear as prey while also describing the full view polar bear as a predator.

4.2 Eye Tracking

Participants' eye movements were recorded as they performed the task with both the PLD and the full resolution video. In order to compare gaze patterns across the two stimuli, videos were first segmented into individual frames. The centroid of each point in the PLD was chosen as a target region. Fixations were compared against each target region. The number of target regions varied across each animal, depending on whether the target region was visible in the full video. To reveal differences across both stimuli, fixations that occurred in the target regions of both sets of videos were accumulated. The percentage of fixation time within target regions for each sequence is shown in Table 2. As can be seen from the table, the time spent fixating in the PLD videos is higher than in the full view videos. This is to be expected due to the high-contrast, sparse nature of the PLD videos.
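A sketch of the comparison described above, under assumed data layouts: per-frame PLD centroids serve as target regions, and a fixation sample counts toward on-target time when it falls within an assumed radius of any centroid visible on that frame. The radius and data structures are illustrative, not the paper's exact procedure.

```python
def percent_time_on_targets(fixation_samples, centroids_by_frame, radius_px=40):
    """Percentage of fixation samples landing within radius_px of any
    visible PLD centroid on the same frame.

    fixation_samples:   list of (frame_index, x, y) gaze samples within fixations
    centroids_by_frame: dict mapping frame_index -> list of (x, y) centroids
    """
    on_target = 0
    for frame, x, y in fixation_samples:
        targets = centroids_by_frame.get(frame, [])
        if any((x - tx) ** 2 + (y - ty) ** 2 <= radius_px ** 2
               for tx, ty in targets):
            on_target += 1
    return 100.0 * on_target / max(len(fixation_samples), 1)
```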
Figure 4: Gaze pattern across the entire Tiger sequence, PLD (left), full resolution (right). Even though the time spent fixating on each region differs, the pattern of gaze over each image is not significantly different. This holds true for the image sequences.

The correlation between the time spent fixating in the "dot regions" of the PLD and the time spent fixating in the same regions of the full video is rather high at 92%. This means that despite the difference in total time fixating on the target regions, a definite pattern emerges in both the PLD and full resolution videos. As can be seen in Figure 4, fixations are overlaid on the target regions (PLD centroids) for all frames of both video representations.

    Image       PLD    Full        Image       PLD    Full
    Bear        61%    12%         Dog         75%    56%
    Elephant    40%    28%         Giraffe     94%    62%
    Ape         17%    28%         Horse       50%    24%
    KangaHop    21%    17%         KangaWalk   35%    37%
    Tiger       49%    49%         Zebra       20%    34%

Table 2: Percentage of fixations occurring within target regions. As expected, this number is higher for the sparse PLD representation. However, the corresponding areas in the full resolution images also show a higher number of fixations on target than on non-target regions.

Gaze is drawn toward similar regions in the full resolution video as in the PLD representation. Figure 5 shows overall correlation values of the percentage of time spent fixating in target regions between each pair of videos. There is good agreement for most of the image sequences. A t-test performed on the percentage of time fixating on target regions showed no significant difference between video sequence and PLD pairs, t(9) = 1.8132, p > .05, giving further confidence of the similarity between gaze distributions over both sets of data.

Figure 5: This graph charts the correlation between fixations on the PLD and the corresponding fixations on the same locations in the full resolution video for each animal. While percentage times are higher in the PLD videos, correlations between the two sets of videos reveal similarities in gaze patterns across presentation methods.
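The reported t-test can be reproduced in outline with SciPy. Assuming the test was run on the per-sequence percentages in Table 2, a paired t-test on those ten pairs yields t(9) = 1.8132, matching the statistic above, with a two-tailed p of roughly 0.10. The 92% correlation was computed over per-region fixation times that are not reproduced here, so it is not recoverable from this table.

```python
from scipy import stats

# Percentage fixation time within target regions, transcribed from Table 2.
pld  = [61, 40, 17, 21, 49, 75, 94, 50, 35, 20]   # PLD videos
full = [12, 28, 28, 17, 49, 56, 62, 24, 37, 34]   # full resolution videos

t, p = stats.ttest_rel(pld, full)      # paired t-test, df = n - 1 = 9
print(f"t(9) = {t:.4f}, p = {p:.3f}")  # t(9) = 1.8132, p ~ 0.10 (> .05)
```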
5 Conclusion & Future Work

To date, few studies have focused on the value of perception research as it applies to the generation of computer animation, particularly biological motion. While there is strong interest in the computer graphics community in creating the next generation of animation tools, there is no single consensus about how best to approach the subject. This project offers an initial approach that has the potential to satisfy the conditions of maximizing the visual result while minimizing the required input for a new system. Our investigation represents an initial step toward understanding what information is communicated by animal PLDs and how this minimal data can be transformed into useful information and applied to other uses. The results from this preliminary study are promising, and several follow-up experiments are planned. In the future, we plan to include stimuli displaying a wider range of motions and to focus on the detection of traits across larger groups of animals as opposed to individual species.

References

ABSOLUTELY WILD VISUALS, 2009. http://www.absolutelywildvisuals.com.

AUTODESK, 2009. Maya.

CUTTING, J., PROFFITT, D., AND KOZLOWSKI, L. 1978. A biomechanical invariant for gait perception. J. Experimental Psychology: Human Perception and Performance 4, 3, 357–372.

FACELAB, 2009. faceLAB eye-tracking system, http://www.seeingmachine.com.

FEHÉR, G., AND SZUNYOGHY, A. 1996. Cyclopedia Anatomicae. Black Dog and Leventhal Publishers.

JOHANSSON, G. 1973. Visual perception of biological motion and a model for its analysis. Perception & Psychophysics 14, 2, 201–211.

JOHNSTON, O., AND THOMAS, F. 1981. The Illusion of Life: Disney Animation. Hyperion.

LASSETER, J. 1987. Principles of traditional animation applied to 3D computer animation. In SIGGRAPH '87: Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, ACM SIGGRAPH, 35–44.

MATHER, G. 1993. Recognition of animal locomotion from dynamic point-light displays. Perception 22, 7, 759–766.

MUYBRIDGE, E. 1979. Muybridge's Complete Human and Animal Locomotion. Dover.

PYLES, J. A., GARCIA, J. O., HOFFMAN, D. D., AND GROSSMAN, E. D. 2007. Visual perception and neural correlates of novel 'biological motion'. Vision Research 47, 21 (Sept.), 2786–2797.

SHIPLEY, T. F. 2003. The effect of object and event orientation on perception of biological motion. Psychological Science 14, 377–380.

THURMAN, S. M., AND GROSSMAN, E. D. 2008. Temporal "bubbles" reveal key features for point-light biological motion perception. Journal of Vision 8, 3, 1–11.

TROJE, N. F., AND WESTHOFF, C. 2006. The inversion effect in biological motion perception: evidence for a "life detector". Current Biology 16, 8 (Apr.), 821–824.