To Get any Project for CSE, IT ECE, EEE Contact Me @ 09666155510, 09849539085 or mail us - ieeefinalsemprojects@gmail.com-Visit Our Website: www.finalyearprojects.org
Attention Driven Foveated Video Quality Assessment
2. Abstract—Contrast sensitivity of the human visual system to visual stimuli can be significantly
affected by several mechanisms, e.g., vision foveation and attention. Existing studies on
foveation-based video quality assessment take into account only the static foveation mechanism.
This paper first proposes an advanced foveal imaging model that generates the perceived
representation of video by integrating visual attention into the foveation mechanism. To
accurately simulate the dynamic foveation mechanism, a novel approach to predicting video
fixations is proposed by mimicking the essential functionality of eye movement. Consequently,
an advanced contrast sensitivity function, derived from the attention-driven foveation
mechanism, is modeled and then integrated into a wavelet-based distortion visibility measure to
build a full-reference attention driven foveated video quality (AFViQ) metric. AFViQ adequately
exploits perceptual visual mechanisms in video quality assessment. Extensive evaluation
results on several publicly available eye-tracking and video quality databases
demonstrate the promising performance of the proposed video attention model, fixation prediction
approach, and quality metric.
3. Existing method:
Existing foveation-based visual quality models ignore an important issue, namely, the
localization of fixations in image or video presentations. They often assume that the fixation
points of the eyes are always located at the center of the image. This assumption may introduce
two biases into the evaluation of visual quality. First, the eccentricity used in the foveation-based
CSF is essentially affected by the position of fixations. Second, the retinal velocity of a visual
object is determined not only by its physical velocity but also by the movement of the eyes. Under
the assumption that the fixation points are kept constantly at the image center, the retinal velocity
always equals the physical velocity. This is inappropriate because an important functionality
of eye movement is to track moving objects and align them with the fovea, so that they can be
observed with the highest sensitivity.
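The distinction above can be made concrete with a small sketch. The helper below is illustrative (not from the paper): it computes the retinal slip of an object as the difference between its physical velocity and the eye's pursuit velocity, showing why the center-fixation assumption (zero eye velocity) wrongly equates retinal and physical velocity.

```python
import numpy as np

def retinal_velocity(object_velocity, eye_velocity):
    """Magnitude of retinal slip given a smooth-pursuit eye movement.

    Velocities are 2-D vectors in deg/s (illustrative units). When the
    eye tracks the object, eye_velocity approaches object_velocity and
    the retinal slip goes to zero, so the object is seen with the
    highest sensitivity. Assuming fixation at the image center
    (eye_velocity = 0) makes retinal velocity equal physical velocity.
    """
    diff = np.asarray(object_velocity, dtype=float) - np.asarray(eye_velocity, dtype=float)
    return float(np.linalg.norm(diff))

# Object moving at 8 deg/s horizontally:
v_obj = (8.0, 0.0)
print(retinal_velocity(v_obj, (0.0, 0.0)))  # center-fixation assumption: 8.0
print(retinal_velocity(v_obj, (8.0, 0.0)))  # perfect pursuit: 0.0
```

Under perfect pursuit the retinal velocity vanishes even though the physical velocity is large, which is exactly the case the center-fixation assumption gets wrong.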
Proposed method:
To overcome the shortcomings of attention-based video quality models and to appropriately integrate
the attention-related mechanisms into quality assessment, an FR attention driven foveated video quality (AFViQ)
metric is proposed in this paper. The main contributions of this work include: i) a perceptual-based foveal
imaging model.
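The paper's attention-driven CSF is not reproduced in this summary, but its eccentricity dependence can be illustrated with the widely used static foveated contrast-threshold model of Geisler and Perry (1998). The sketch below uses the commonly cited parameter fits; it is a point of reference, not the proposed metric's exact CSF.

```python
import math

# Commonly cited fits for the Geisler-Perry foveated model
# (assumed values, for illustration only):
ALPHA = 0.106    # spatial-frequency decay constant
E2 = 2.3         # half-resolution eccentricity (degrees)
CT0 = 1.0 / 64   # minimal contrast threshold at the fovea

def contrast_sensitivity(f, e):
    """Contrast sensitivity at spatial frequency f (cycles/deg) and
    retinal eccentricity e (deg). The contrast threshold grows
    exponentially with both frequency and distance from the fixation
    point; thresholds above 1 mean the stimulus is invisible."""
    ct = CT0 * math.exp(ALPHA * f * (e + E2) / E2)
    return 1.0 / min(ct, 1.0)

# Sensitivity drops sharply away from the fixation point:
print(contrast_sensitivity(8.0, 0.0) > contrast_sensitivity(8.0, 10.0))  # True
```

This is exactly why fixation localization matters: the eccentricity `e` fed into such a CSF depends on where the eyes actually fixate, not on the image center.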
Merits:
The advantage of the proposed foveal imaging model lies in its exploitation of fixation localization
and the attention mechanism.
4. Results:
Fig. Illustration of the video feature maps and the overall attention map: (a) video frame; (b) original
GBVS map; (c) salient motion map; (d) skin attention map; (e) "surprising" map; (f) overall
attention map.
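The figure shows four feature maps (GBVS, salient motion, skin, "surprising") fused into one overall attention map. The paper's exact fusion rule is not given in this summary; a simple weighted linear fusion with per-map normalization, sketched below, is one common choice and is purely illustrative.

```python
import numpy as np

def normalize(m):
    """Rescale a feature map to [0, 1]; a constant map becomes all zeros."""
    m = np.asarray(m, dtype=float)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def overall_attention(gbvs, motion, skin, surprise, weights=(1.0, 1.0, 1.0, 1.0)):
    """Fuse the per-frame feature maps into one attention map by a
    weighted sum of normalized maps, renormalized to [0, 1].
    Equal weights are an assumption, not the paper's tuning."""
    maps = [normalize(m) for m in (gbvs, motion, skin, surprise)]
    fused = sum(w * m for w, m in zip(weights, maps))
    return normalize(fused)

# Usage with random stand-in maps for a 4x4 frame:
rng = np.random.default_rng(0)
att = overall_attention(*(rng.random((4, 4)) for _ in range(4)))
print(att.shape)  # (4, 4)
```

Any monotone fusion that keeps the map in [0, 1] would serve the same illustrative purpose; the weights are where an actual model would encode the relative importance of each cue.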