SlideShare ist ein Scribd-Unternehmen logo
1 von 13
Downloaden Sie, um offline zu lesen
International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print),
ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME
11
A NEW FACE RECOGNITION SCHEME FOR FACES WITH EXPRESSIONS,
GLASSES AND ROTATION
Walaa M Abdel-Hafiez1
, Mohamed Heshmat2
, Moheb Girgis3
, Seham Elaw4
1, 2, 4
Faculty of Science, Mathematical and Computer Science Department,
Sohag University, 82524, Sohag, Egypt
3
Faculty of Science, Department of Computer Science,
Minia University, El-Minia, Egypt
ABSTRACT
Face recognition is considered as one of the hottest research areas in computer vision field.
The purpose of the proposed research work is to develop an algorithm that can recognize a person by
comparing the characteristics of his/her face, which may have expressions, glasses and/or rotation, to
those of known faces in a database. This work provides a simple and efficient technique to recognize
human faces. The new method is based on variance estimation of the three components of the color
faces images and facial features extraction of the most facial features. The features under
consideration are eyes, nose and mouth. The technique used to extract facial features was developed
based on feature location with respect to face dimensions. The proposed algorithm has been tested on
various face images and its performance was found to be good in most cases. Experimental results
show that our method of human face recognition achieves very encouraging results with good
accuracy, great speed and simple computations.
Keywords: Face Recognition, Facial Features Extraction, Color Spaces, Variance Estimation.
I. INTRODUCTION
Face recognition has been used in various applications where personal identification is
required like, visual attendance systems where student identification and recognition are achieved
through face recognition. Face recognition [1-4] has been used also in gaming applications, security
systems, credit-card verification, criminal identifications, and teleconference, and short face
recognition applications are widely used in many corporate and educational institutions.
INTERNATIONAL JOURNAL OF COMPUTER ENGINEERING &
TECHNOLOGY (IJCET)
ISSN 0976 – 6367(Print)
ISSN 0976 – 6375(Online)
Volume 5, Issue 4, April (2014), pp. 11-23
© IAEME: www.iaeme.com/ijcet.asp
Journal Impact Factor (2014): 8.5328 (Calculated by GISI)
www.jifactor.com
IJCET
© I A E M E
International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print),
ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME
12
Human faces are complex objects with features that can vary over time. However, humans
have a natural ability to recognize faces and identify persons with just a glance. Our natural
recognition ability also extends beyond face recognition, where we are equally able to quickly
recognizing patterns, sounds or smells. Unfortunately, this ability does not exist in machines, thus the
need to simulate recognition artificially in our attempts to create intelligent autonomous machines.
Facial feature recognition is an example of popular applications for artificial intelligence systems.
Face recognition by machines has various important applications in real life, such as, electronic and
physical access control, national defense and international security. Simulating our face recognition
natural ability in machines is difficult but not impossible. Throughout our lifetime, many faces are
seen and stored naturally in our memories which forming a kind of database. Machine recognition of
faces requires also a database, which consists of facial images that may include different face images
of the same person. The development of intelligent face recognition systems requires providing
sufficient information and meaningful data during machine learning of a face.
Face Recognition can be defined as the visual perception of familiar faces or the biometric
identification by scanning a person's face and matching it against a database of known faces. In both
definitions, the faces can be identified as familiar or known faces.
One of the main challenging problems in building an automated system that perform face
recognition and verification tasks is face detection and facial feature extraction. Though people are
good at face identification, recognizing the human faces automatically by computer is a very difficult
task. Face recognition is influenced by many complications, such as the differences of facial
expression, the light directions of imaging, and the variety of posture, size and angle. Even for the
same person, the images taken in a different surrounding condition may be unlike. The problem is so
complicated that the achievement in the field of automatic face recognition by computer is not as
satisfied as the finger prints [5].
The objective of facial feature localization is to detect the presence and location of features
after the locations of faces are extracted by using any face detection method. The challenges
associated with face and facial feature detection methods can be attributed to the following
factors [6]:
• Intensity: There are three types of intensity: color, gray, and binary.
• Pose: Face images vary due to the relative camera-face pose (frontal, 45º, profile), and some
facial features such as an eye may become partially or wholly occluded.
• Structural components: Facial features such as beards, mustaches, and glasses may or may not
be present.
• Image rotation: Face images directly vary for different rotations.
• Poor quality: Image intensity in poor-quality images, for instance, blurry images, distorted
images, and images with noise, becomes unusual.
• Facial expression: The appearance of faces depends on a personal facial expression.
• Unnatural intensity: Cartoon faces and rendered faces from 3D model have unnatural intensity.
• Occlusion: Faces may be partially occluded by other objects such as hand, scarf, etc.
• Illumination: Face images vary due to the position of light source [6].
Phimoltares et al. [6] presented algorithms for all types of face images in the presence of
several image conditions. There are two main stages in their method. In the first stage, the faces are
detected from an original image by using Canny edge detection and their proposed average face
templates. Second, a proposed neural visual model (NVM) is used to recognize all possibilities of
facial feature positions. Input parameters are obtained from the positions of facial features and the
face characteristics that are low sensitive to intensity change. Finally, to improve the results, image
dilation is applied for removing some irrelevant regions.
International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print),
ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME
13
Nikolaidis and Pitas [7] proposed a combined approach for facial feature extraction and
determination of gaze at direction that employs some improved variations of the adaptive hough
transforms for curve detection, minima analysis of feature candidates, template matching for inner
facial feature localization, active contour models for inner face contour detection and projective
geometry properties for accurate pose determination.
Koo and Song [8] suggested defining 20 facial features. Their method detects the facial
candidate regions by haar classifier, and detects eye candidate region and extracts eye features by
dilate operation, then detect lip candidate region using the features. The relative color difference of
a* in the L*a*b* color space was used to extract lip feature and to detect nose candidate region and
detected 20 features from 2D image by analyzing end of nose.
Yen and Nithianandan [9] presented an automatic facial feature extraction method based on
the edge density distribution of the image. In the preprocessing stage, a face is approximated to an
ellipse, and a genetic algorithm is applied to search for the best ellipse region match. In the feature
extraction stage, a genetic algorithm is applied to extract the facial features, such as the eyes, nose
and mouth, in the predefined sub regions.
Gu et al. [5] proposed a method to extract the feature points from faces automatically. It
provided a feasible way to locate the positions of the two eyeballs, near and far corners of eyes,
midpoint of nostrils and mouth corners from face image.
Srivastava [10] proposed an efficient algorithm for facial expression recognition system,
which performs facial expression analysis in a near real time from a live web cam feed. The system is
composed of two different entities: trainer and evaluator. Each frame of video feed is passed through
a series of steps, including Haar classifiers, skin detection, feature extraction, feature point tracking,
creating a learned support vector machine model to classify emotions to achieve a tradeoff between
accuracy and result rate.
Radha and Nallammal [11] described a comparative analysis of face recognition methods:
principle component analysis (PCA), linear discriminant analysis (LDA) and independent component
analysis (ICA) based on curvelet transform. The algorithms are tested on ORL Database.
Kumar et al. [12] presented an automated system for human face recognition in a real time
background world for a large homemade dataset of persons' faces. To detect real time human face
AdaBoost with Haar cascade is used and a simple fast PCA and LDA are used to recognize the faces
detected. The matched face is then used to mark attendance in the laboratory, in their case.
El-Bashir [13] introduced a method for face recognition. After a preprocessing and
normalization stage to the image, PCA is applied to recognize a specified face. If the face is not
recognized correctly, then more features are extracted: face color and moment invariant. The face is
recognized again using decision tree.
Javed [14] proposed a computer system that can recognize a person by comparing the
characteristics of face to those of known individuals. He focused on frontal two-dimensional images
that have been taken in a controlled environment, i.e. the illumination and the background were
constant, and used the PCA technique. The system gives good results especially with angled face
views.
Pattanasethanon and Savithi [15] presented a novel technique for facial recognition through
the implementation of successes mean quantization transform and spare network of winnow with the
assistance of eigenface computation. After having limited the frame of the input image or images
from web-cam, the image has cropped into an oval or ellipse shape. Then the image is transformed
into grey scale color and is normalized in order to reduce color complexities. They also focused on
the special characteristics of human facial aspects such as nostril areas and oral areas, compared the
images obtained by web-cam with images in database, and have good accuracy with low time.
International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print),
ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME
14
Wu et al. [16] presented a system that can automatically remove eyeglasses from an input
face image. The system consists of three modules: eyeglasses recognition, localization and removal.
Given a face image, first, an eyeglasses classifier is used to determine if a pair of eyeglasses is
present. Then, a Morkov chain Monte Carlo method is applied to locate the glasses by searching for
the global optimum of the posteriori. Finally, a novel example based approach has been developed to
synthesize an image with eyeglasses removed from the detected and localized face image. The
experiments demonstrated that their approach produces good quality of face images with eyeglasses
removed.
Chen and Gao [17] presented a local attributed string matching (LAStrM) approach to
recognize face profiles in the presence of interferences. The conventional profile recognition
algorithms heavily depend on the accuracy of the facial area cropping. However, in realistic scenarios
the facial area may be difficult to localize due to interferences (e.g., glasses, hairstyles). The proposed
approach is able to efficiently find the most discriminative local parts between face profiles
addressing the recognition problem with interferences. Experimental results have shown that the
proposed matching scheme is robust to interferences compared against several primary approaches
using two profile image databases (Bern and FERET).
This paper presents a new face recognition method for faces with expressions, glasses and/or
rotation. The proposed method uses variance estimation of RGB components to compare the
extracted faces and the faces in the database used in comparison. In addition, Euclidean distance of
facial features of the extracted faces from test image and faces extracted from the database after a
variance test is used.
The rest of this paper is organized as follows. Section II describes the methodology of the
proposed method with its stages: variance estimation, feature extraction, method representation and
the proposed algorithm. Section III presents the results and method analysis. Section IV draws the
conclusion of this work and possible points for future work.
II. METHODOLOGY
The face and facial feature detection algorithms are applied to detect generic faces from
several face images. Most automatic face recognition approaches are based on frontal images. Facial
profiles, on the other hand, provide complementary information of the face that is not present in
frontal faces. Fusion of frontal and profile views makes the overall personal identification technique
foolproof and efficient.
The proposed face recognition method is based on the average variance estimation of the
three components of RGB faces images, and the extraction of the most facial features. The features
under consideration are eyes, nose and mouth. The technique used to extract facial features is based
on feature location with respect to the dimensions of the face image.
Given a face image, which obtained from a camera or preprocessed previously, our goal is to
identify this face image using a database of known humans' faces. Therefore, our algorithm is divided
into three main steps. First: variance estimation of faces images. Second: facial feature extraction,
an effective method to extract facial features like eyes, nose and mouth depending on their locations
with respect to the face region is used, which we have developed before in [18]. Third: similar face
identification or image searching; the goal of this step is to scan the database of known faces to find
the most similar faces to the test face.
International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print),
ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME
15
1. Variance Estimation
Variance calculation is a very light calculation and considered as an important constraint to
prove similarity between two images. Let x be a vector of dimension n, the variance of x can be
calculated as follows:
n
xix
n
i
∑ −
= =1
)(
var
2
, (1)
where x is the mean value of x .
However, it is not necessary that the two images which have the same variance to have the
same contents. Different images may have the same value of variance because variance estimation is
totally depending on the values of image pixels and their mean value. So the variance is used at first
to filter the database of faces and extract faces that have the same or close value to variance of the
input face image, then another test is required to detect the most similar faces to this test face [18].
When working with RGB color images, there are three values for each pixel in the image,
representing the red, green, and blue components. To compute the variance of RGB image, the
variance for each color is calculated separately. So there are three values for variance, one for the red
values, another for the green values and third for the blue values [18], which are calculated as
follows:
2 2 2
1 1 1
( ) ( ) ( )
, , ,
n n n
r r g g b b
i i i
red green blue
x x x x x x
v v v
n n n
= = =
− − −
= = =
∑ ∑ ∑
(2)
To simplify the comparison, the average of the three values is computed as follows:
( )
,
3
red green bluev v v
v
+ +
= (3)
2. Facial Features Extraction
In this part of work, the aim is to compare two color faces to decide whether they both belong
to the same person or not and detect the similarity between them using Euclidean distance. RGB
(Red, Green and Blue) color space, fig. 1, which is used here, is an additive color system based on
tri-chromatic theory. It is often found in systems that use a CRT to display images. The RGB color
system is very common, and is being used in virtually every computer system as well as television,
video etc [19], [20].
Red Green Blue
Figure 1: RGB color model
International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print),
ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME
16
In RGB color model, any source color (F) can be matched by a linear combination of three
color primaries, i.e. Red, Green and Blue, provided that none of those three can be matched by a
combination of the other two, see fig. 1.
Here, F can be represented as:
,F r R gG bB= + + (4)
where r, g and b are scalars indicating how much of each of the three primaries (R, G and B) are
contained in F. The normalized form of F can be as follows:
' ' ' ,F R R G G B B= + + (5)
where
' / ( ) ,
' / ( ) , (6)
' / ( ) ,
R r r g b
G g r g b
B b r g b
= + +
= + +
= + +
To extract facial features, we used our method proposed in [18], which is based on feature
location with respect to the whole face region. By detecting the candidate regions of left eye, right
eye, nose and mouth, by training, then applying the obtained dimensions of each region on several
other faces with the same size, the results were very good, as shown in fig. 2.
Given a face image of 200 pixels height and 200 pixel width, after training with a lot of
images, we found that the candidate region of eyes is located between rows 60 and 95, columns 25
and 80 for right eye and columns 115 and 170 for left eye. The candidate region for the nose is
located between rows 110 and 145 and columns 75 and 125 and the candidate region for the mouth is
located between rows 145 and 185 and columns 60 and 135. When applying the dimensions obtained
by training on many face images, we found that they there were suitable for any face image with the
same width and height even it has expression, as shown in fig. 2.
Figure 2: Examples of Feature extraction
(6)
International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print),
ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME
17
This feature extraction technique can be generalized and the candidate region for each feature,
which is based on height and width of the face image to match any face image size can be as follows:
• Right eye: Rows from (height/3.3) to (height /2.1)
Columns from (width/8) to (width/2.5)
• Left eye: Rows from (height/3.3) to (height /2.1)
Columns from (width/1.7) to (width/1.17)
• Nose: Rows from (height/1.8) to (height /1.38)
Columns from (width/2.67) to (width/1.6)
• Mouth: Rows from (height/1.38) to (height /1.08)
Columns from (width/3.33) to (width/1.48)
3. Method Representation
The proposed algorithm consists of three parts. Firstly, variance estimation is applied to
extract database images, which have a close variance value to the test image. Secondly, the features
extraction method is used to extract facial features from the face images. Finally, Euclidean distance
of facial features is computed by the following equation:
[ ] [ ]( )
[ ] [ ]( )
[ ] [ ]( )
d abs test feature R matched feature R
abs test feature G matched feature G
abs test feature B matched feature B
= − +
− +
−
(7)
By applying eq. (7) to find the distance between the right eye region of the test image and the
right eye region of each image, which has variance value close to the variance value of the test image
(returned from variance test), then, by applying eq. (7) to left eye, nose and mouth regions and find
summation of these four distance values, it can be decided which of the images that have close
variance value is the most similar to test image. The steps of the proposed algorithm are shown in
fig. 3.
Step 1: Read input image.
Step 2: Read the database of images and calculate the variance of each image by using eq. (2) , (3), and
put variance values in an array,
Step 3: Calculate variance of the test image using eq. (2), (3).
Step 4: Compare variance value of test image and each image in database and keep locations of the most
similar images to test image, which satisfy the condition ( )600 variance difference 600− ≤ ≤ , in an array.
Step 5: For i=1 to number of similar images which extracted from step 4.
a) Extract facial features from each image according to location (right eye – left eye – nose –
mouth).
b) Calculate the Euclidean distance between the 3-arrays containing the RGB color values of each
feature using eq. (7) plus the Euclidean distance between the 3-arrays containing the RGB color
values of the whole test image and each similar image from step 4.
Step 6: Detect minimum distance (d ) and location of the image that has the minimum distance from step
5.
Step 7: Display the best-matched image from the database.
Figure 3: The proposed algorithm's steps
International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print),
ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME
18
III. RESULTS AND DISCUSSION
The experiments were performed on a computer with 2.20 GHz speed and 4 Gbyte RAM
using several color images containing faces and a database of 10 different images with different
sizes, as shown in fig. 4. The test images include different images for the same person with different
conditions. Some images have expressions, glasses and some rotation, as shown in fig. 5. The
images in the database have been chosen carefully such that they are standard and have no
expressions if possible.
The proposed algorithm gives good results in recognizing all the test images, which belong to
the same person in the database, with different expressions, glasses and rotation. Even if the gaze
direction is different, the proposed algorithm succeeds in returning the correct location of the right
image in the used database.
Figure 4: The used database
Figure 5: The used database and some of test images
Table 1 shows some of the results obtained using 150 test RGB images of 10 different
persons and a database of 10 standard RGB images of those persons, which is shown in fig. 4. The
first column of the table shows the test face. The next columns show the results that were obtained by
applying the classical method, variance estimation formula, feature extraction method, the proposed
method and the time in each method. The classical method is the general method in comparing two
images, by comparing pixel by pixel and computing the summation of the difference of all pixels.
The classical method performs on the whole image without partitioning. The variance estimation is
54321
109876
International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print),
ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME
19
applied by using eq. (2) and (3). We have displayed the results of the variance separately to show
how the variance computation is efficient and important in comparing similarity between images.
When we used the variance estimation as a first test in the proposed method, we noticed that it gives
the correct image location from the database if the test image and the matched image in the database
have similar conditions of illumination and background. Also, we have applied the feature extraction
method separately to study how it is efficient in face recognition. Facial features are extracted and
Euclidean distance is computed for each feature then the summation of the difference is obtained. By
comparing the difference between the test image and all the images in the database, the matched
image is detected as the image has minimum difference. It is noticed from the table that the classical
method and variance estimation method have less time than the two other methods. The execution of
the proposed method proceeds as follows: the first test (variance test) with variance difference range
equals [-600,600] is applied first to detect the images that have close variance values to the test
image. (It should be noted that the variance difference range is arbitrary and can be changed). The
algorithm returns the locations of faces whose variance value close to the variance of the test face. In
order to know which one of them is the same or the closest to the test face, the facial features of the
test face and the facial features of the obtained face images are extracted then the Euclidean distance
of their RGB components is calculated by eq. (7). The face image with the minimum distance (d) is
considered as the best-matched image and its location is returned.
The search efficiency is evaluated by how many times the distance (d) computations are
performed on average compared to the size of the database. In the proposed method the total number
of distance calculations is small, because it uses the variance test to find out the face images that have
a close variance value to the input face image, then the distance computation is performed only on
those images where their number is always small compared to the database size. But execution time
of the proposed method is high compared to the other methods because the proposed method works
in two stages or two tests variance estimation and facial feature extraction where each of these stages
take some time. The execution time depends on the database size. The execution time of the classical
method is 0.43 seconds on average, the execution time of the variance estimation method is 0.1
seconds on average, the execution time of facial feature extraction method is 0.22 seconds on average
and the execution time of the proposed method is 1.06 seconds on average, as shown in Fig. 6.
In most cases, the proposed algorithm gives good results. However, in some cases, the results
are not good, because the proposed algorithm is affected by illumination conditions in some images,
zooming and big rotation in some others, see Table 2.
Figure 6: The time chart of the proposed algorithm and comparison methods
0
0.2
0.4
0.6
0.8
1
1.2
1.4
1.6
1.8
2
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
Time(inseconds)
Images
International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print),
ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME
20
Table 1: Some results of the proposed algorithm and the comparison methods
International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print),
ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME
21
Table 1: Some results of the proposed algorithm and the comparison methods (Continued)
International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print),
ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME
22
Table 2: Some false positive results of the proposed algorithm and the comparison methods
IV. CONCLUSION
In this paper, a new method of face recognition, for faces with expressions, glasses and/or
rotation, based on variance estimation and facial feature extraction is proposed. It can be used in face
recognition systems such as video surveillance, human computer interfaces, image database
management and smart home applications. The proposed algorithm has been tested using a database
of faces and the results showed that it is able to recognize a variety of different faces in spite of
different expressions, rotation and illumination conditions. Zoomed images and their effect on the
recognition of humans need further investigation.
REFERENCES
[1] P. Phillips, P. Flynn, T. Scruggs, K. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min and
W. Worek, Overview of the Face Recognition Grand Challenge, proceedings of the 2005
IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR),
(2005).
[2] S. Lin, An introduction to Face Recognition technology, informing science issue on
multimedia informing technologies–part 2, 2000, 3(1).
[3] Z. Pan, G. Healey, M. Prasad and B. Tromberg, Face Recognition in Hyperspectral Images,
IEEE Transactions On Pattern Analysis And Machine Intelligence, 25(12), 2003, 1552-1560.
[4] V. Blanz and T. Vetter, Face Recognition Based on Fitting a 3D Morphable Model, IEEE
Transactions On Pattern Analysis And Machine Intelligence, 25(9), 2003.
[5] H. Gu, G. Su and C. Du, Feature Points Extraction from Faces, Image and Vision Computing
NZ, Palmerston North, 2003, 154-158.
[6] S. Phimoltares, C. Lursinsap and K. Chamnongthai, Face detection and facial feature
localization without considering the appearance of image context, Image and Vision
Computing, 25, 2007, 741-753.
[7] Α. Nikolaidis and I. Pitas, Facial Feature Extraction and Pose Determination, Pattern
Recognition, Elsevier, 33(11), 2000, 1783-1791.
[8] H. Koo and H. Song, Facial Feature Extraction for Face Modeling Program, International
Journal Of Circuits, Systems And Signal Processing, 4(4), 2010, 169-176.
[9] G. Yen and N. Nithianandan, Facial Feature Extraction Using Genetic Algorithm,
Proceedings of the Evolutionary Computation, 2, IEEE 2002, 1895-1900.
International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print),
ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME
23
[10] S. Srivastava, Real Time Facial Expression Recognition Using A Novel Method, The
International Journal of Multimedia & Its Applications (IJMA), 4(2), 2012, 49-57.
[11] V. Radha and N. Nallammal, Comparative Analysis of Curvelets Based Face Recognition
Methods, Proceedings of the World Congress on Engineering and Computer Science, 1,
2011.
[12] K. Kumar, S. Prasad, V. Semwal and R. Tripathi, Real Time Face Recognition Using
Adaboost Improved Fast PCA Algorithm, International Journal of Artificial Intelligence &
Applications (IJAIA), 2(3), 2011.
[13] M. El-Bashir, Face Recognition Using Multi-Classifier, Applied Mathematical Sciences,
6(45), 2012, 2235 - 2244.
[14] A. Javed, Face Recognition Based on Principal Component Analysis, International Journal
of Image, Graphics and S ignal Processing, 2, 2013, 38-44.
[15] P. Pattanasethanon and C. Savithi, Human Face Detection and Recognition using Web-Cam,
Journal of Computer Science, 8(9), 2012, 1585-1593.
[16] C. Wu, C. Liu, H. Shum, Y. Xu and Z. Zhang, Automatic Eyeglasses Removal from Face
Images, ACCV2002: The 5th Asian Conference on Computer Vision, 2002.
[17] W. Chen and Y. Gao, Recognizing Face profiles in the Presence of Hairs/glasses
Interferences, 11th. International Conference Control, Automation, Robotics and Vision
(ICARCV), 2010, 1854-1859.
[18] W. Mohamed, M. Heshmat, M. Girgis and S. Elaw, A new Method for Face Recognition
Using Variance Estimation and Feature Extraction, International Journal of Emerging Trends
and Technology in Computer Science (IJETTCS), 2(2), 2013, 134-141.
[19] A. Ford and A. Roberts, Colour Space Conversions, Technical Report,
http://www.poynton.com/PDFs/coloureq.pdf, 1998.
[20] C. Yang and S. Kwok, Efficient Gamut Clipping for Colour Image, Processing using LHS
and YIQ", Optical Engineering Journal, 42(3), 2003, 701–711.
[21] U.K.Jaliya and J.M.Rathod, “A Survey on Human Face Recognition Invariant to
Illumination”, International journal of Computer Engineering & Technology (IJCET),
Volume 4, Issue 2, 2013, pp. 517 - 525, ISSN Print: 0976 – 6367, ISSN Online:
0976 – 6375.
[22] Abhishek Choubey and Girish D. Bonde, “Face Recognition Across Pose With Estimation of
Pose Parameters”, International Journal of Electronics and Communication Engineering
&Technology (IJECET), Volume 3, Issue 1, 2012, pp. 311 - 316, ISSN Print: 0976- 6464,
ISSN Online: 0976 –6472.
[23] J. V. Gorabal and Manjaiah D. H., “Texture Analysis for Face Recognition”, International
Journal of Graphics and Multimedia (IJGM), Volume 4, Issue 2, 2013, pp. 20 - 30,
ISSN Print: 0976 – 6448, ISSN Online: 0976 –6456.
[24] S. K. Hese and M. R. Banwaskar, “Appearance Based Face Recognition by PCA and LDA”,
International journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 2,
2013, pp. 48 - 57, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.

Weitere ähnliche Inhalte

Was ist angesagt?

Multi Local Feature Selection Using Genetic Algorithm For Face Identification
Multi Local Feature Selection Using Genetic Algorithm For Face IdentificationMulti Local Feature Selection Using Genetic Algorithm For Face Identification
Multi Local Feature Selection Using Genetic Algorithm For Face Identification
CSCJournals
 
International Journal of Image Processing (IJIP) Volume (1) Issue (2)
International Journal of Image Processing (IJIP) Volume (1) Issue (2)International Journal of Image Processing (IJIP) Volume (1) Issue (2)
International Journal of Image Processing (IJIP) Volume (1) Issue (2)
CSCJournals
 
Report Face Detection
Report Face DetectionReport Face Detection
Report Face Detection
Jugal Patel
 

Was ist angesagt? (20)

PAPER2
PAPER2PAPER2
PAPER2
 
Comparative Study of Lip Extraction Feature with Eye Feature Extraction Algor...
Comparative Study of Lip Extraction Feature with Eye Feature Extraction Algor...Comparative Study of Lip Extraction Feature with Eye Feature Extraction Algor...
Comparative Study of Lip Extraction Feature with Eye Feature Extraction Algor...
 
IRJET- Persons Identification Tool for Visually Impaired - Digital Eye
IRJET-  	  Persons Identification Tool for Visually Impaired - Digital EyeIRJET-  	  Persons Identification Tool for Visually Impaired - Digital Eye
IRJET- Persons Identification Tool for Visually Impaired - Digital Eye
 
face recognition
face recognitionface recognition
face recognition
 
A study of techniques for facial detection and expression classification
A study of techniques for facial detection and expression classificationA study of techniques for facial detection and expression classification
A study of techniques for facial detection and expression classification
 
Face Detection Using Modified Viola Jones Algorithm
Face Detection Using Modified Viola Jones AlgorithmFace Detection Using Modified Viola Jones Algorithm
Face Detection Using Modified Viola Jones Algorithm
 
ICIC-2015
ICIC-2015ICIC-2015
ICIC-2015
 
A Fast Recognition Method for Pose and Illumination Variant Faces on Video Se...
A Fast Recognition Method for Pose and Illumination Variant Faces on Video Se...A Fast Recognition Method for Pose and Illumination Variant Faces on Video Se...
A Fast Recognition Method for Pose and Illumination Variant Faces on Video Se...
 
Facial expression recognition based on image feature
Facial expression recognition based on image featureFacial expression recognition based on image feature
Facial expression recognition based on image feature
 
Multi Local Feature Selection Using Genetic Algorithm For Face Identification
Multi Local Feature Selection Using Genetic Algorithm For Face IdentificationMulti Local Feature Selection Using Genetic Algorithm For Face Identification
Multi Local Feature Selection Using Genetic Algorithm For Face Identification
 
Face Recognition using C#
Face Recognition using C#Face Recognition using C#
Face Recognition using C#
 
International Journal of Image Processing (IJIP) Volume (1) Issue (2)
International Journal of Image Processing (IJIP) Volume (1) Issue (2)International Journal of Image Processing (IJIP) Volume (1) Issue (2)
International Journal of Image Processing (IJIP) Volume (1) Issue (2)
 
Report Face Detection
Report Face DetectionReport Face Detection
Report Face Detection
 
Face Recognition Research Report
Face Recognition Research ReportFace Recognition Research Report
Face Recognition Research Report
 
PERFORMANCE EVALUATION AND IMPLEMENTATION OF FACIAL EXPRESSION AND EMOTION R...
PERFORMANCE EVALUATION AND IMPLEMENTATION  OF FACIAL EXPRESSION AND EMOTION R...PERFORMANCE EVALUATION AND IMPLEMENTATION  OF FACIAL EXPRESSION AND EMOTION R...
PERFORMANCE EVALUATION AND IMPLEMENTATION OF FACIAL EXPRESSION AND EMOTION R...
 
Face Detection
Face DetectionFace Detection
Face Detection
 
IRJET- A Survey on Facial Expression Recognition Robust to Partial Occlusion
IRJET- A Survey on Facial Expression Recognition Robust to Partial OcclusionIRJET- A Survey on Facial Expression Recognition Robust to Partial Occlusion
IRJET- A Survey on Facial Expression Recognition Robust to Partial Occlusion
 
20320130406016
2032013040601620320130406016
20320130406016
 
EFFECT OF FACE TAMPERING ON FACE RECOGNITION
EFFECT OF FACE TAMPERING ON FACE RECOGNITIONEFFECT OF FACE TAMPERING ON FACE RECOGNITION
EFFECT OF FACE TAMPERING ON FACE RECOGNITION
 
HUMAN FACE RECOGNITION USING IMAGE PROCESSING PCA AND NEURAL NETWORK
HUMAN FACE RECOGNITION USING IMAGE PROCESSING PCA AND NEURAL NETWORKHUMAN FACE RECOGNITION USING IMAGE PROCESSING PCA AND NEURAL NETWORK
HUMAN FACE RECOGNITION USING IMAGE PROCESSING PCA AND NEURAL NETWORK
 

Andere mochten auch

FIRING ANGLE SVC MODEL FOR ANALYZING THE PERFORMANCE OF TRANSMISSION NETWORK ...
FIRING ANGLE SVC MODEL FOR ANALYZING THE PERFORMANCE OF TRANSMISSION NETWORK ...FIRING ANGLE SVC MODEL FOR ANALYZING THE PERFORMANCE OF TRANSMISSION NETWORK ...
FIRING ANGLE SVC MODEL FOR ANALYZING THE PERFORMANCE OF TRANSMISSION NETWORK ...
IAEME Publication
 
ODD EVEN BASED BINARY SEARCH
ODD EVEN BASED BINARY SEARCHODD EVEN BASED BINARY SEARCH
ODD EVEN BASED BINARY SEARCH
IAEME Publication
 

Andere mochten auch (10)

50120140503017 2
50120140503017 250120140503017 2
50120140503017 2
 
Study on Automatic Age Estimation and Restoration for Verification of Human F...
Study on Automatic Age Estimation and Restoration for Verification of Human F...Study on Automatic Age Estimation and Restoration for Verification of Human F...
Study on Automatic Age Estimation and Restoration for Verification of Human F...
 
SRF THEORY BASED STATCOM FOR COMPENSATION OF REACTIVE POWER AND HARMONICS
SRF THEORY BASED STATCOM FOR COMPENSATION OF REACTIVE POWER AND HARMONICSSRF THEORY BASED STATCOM FOR COMPENSATION OF REACTIVE POWER AND HARMONICS
SRF THEORY BASED STATCOM FOR COMPENSATION OF REACTIVE POWER AND HARMONICS
 
NON-ISOLATED SOFT SWITCHING DC-DC CONVERTER AND LOAD AT FULL RANGE OF ZVS
NON-ISOLATED SOFT SWITCHING DC-DC CONVERTER AND LOAD AT FULL RANGE OF ZVS NON-ISOLATED SOFT SWITCHING DC-DC CONVERTER AND LOAD AT FULL RANGE OF ZVS
NON-ISOLATED SOFT SWITCHING DC-DC CONVERTER AND LOAD AT FULL RANGE OF ZVS
 
FIRING ANGLE SVC MODEL FOR ANALYZING THE PERFORMANCE OF TRANSMISSION NETWORK ...
FIRING ANGLE SVC MODEL FOR ANALYZING THE PERFORMANCE OF TRANSMISSION NETWORK ...FIRING ANGLE SVC MODEL FOR ANALYZING THE PERFORMANCE OF TRANSMISSION NETWORK ...
FIRING ANGLE SVC MODEL FOR ANALYZING THE PERFORMANCE OF TRANSMISSION NETWORK ...
 
MICROCONTROLLER BASED SOLAR POWER INVERTER
MICROCONTROLLER BASED SOLAR POWER INVERTERMICROCONTROLLER BASED SOLAR POWER INVERTER
MICROCONTROLLER BASED SOLAR POWER INVERTER
 
DESIGN CONTROL SYSTEM OF AN AIRCRAFT
DESIGN CONTROL SYSTEM OF AN AIRCRAFTDESIGN CONTROL SYSTEM OF AN AIRCRAFT
DESIGN CONTROL SYSTEM OF AN AIRCRAFT
 
REDUCTION OF HARMONIC DISTORTION IN BLDC DRIVE USING BL-BUCK BOOST CONVERTER ...
REDUCTION OF HARMONIC DISTORTION IN BLDC DRIVE USING BL-BUCK BOOST CONVERTER ...REDUCTION OF HARMONIC DISTORTION IN BLDC DRIVE USING BL-BUCK BOOST CONVERTER ...
REDUCTION OF HARMONIC DISTORTION IN BLDC DRIVE USING BL-BUCK BOOST CONVERTER ...
 
ODD EVEN BASED BINARY SEARCH
ODD EVEN BASED BINARY SEARCHODD EVEN BASED BINARY SEARCH
ODD EVEN BASED BINARY SEARCH
 
Ijmet 07 06_005
Ijmet 07 06_005Ijmet 07 06_005
Ijmet 07 06_005
 

Ähnlich wie 50120140504002

Innovative Analytic and Holistic Combined Face Recognition and Verification M...
Innovative Analytic and Holistic Combined Face Recognition and Verification M...Innovative Analytic and Holistic Combined Face Recognition and Verification M...
Innovative Analytic and Holistic Combined Face Recognition and Verification M...
ijbuiiir1
 
Ijarcet vol-2-issue-4-1352-1356
Ijarcet vol-2-issue-4-1352-1356Ijarcet vol-2-issue-4-1352-1356
Ijarcet vol-2-issue-4-1352-1356
Editor IJARCET
 

Ähnlich wie 50120140504002 (20)

IRJET - Emotionalizer : Face Emotion Detection System
IRJET - Emotionalizer : Face Emotion Detection SystemIRJET - Emotionalizer : Face Emotion Detection System
IRJET - Emotionalizer : Face Emotion Detection System
 
A Study on Face Recognition Technique based on Eigenface
A Study on Face Recognition Technique based on EigenfaceA Study on Face Recognition Technique based on Eigenface
A Study on Face Recognition Technique based on Eigenface
 
Innovative Analytic and Holistic Combined Face Recognition and Verification M...
Innovative Analytic and Holistic Combined Face Recognition and Verification M...Innovative Analytic and Holistic Combined Face Recognition and Verification M...
Innovative Analytic and Holistic Combined Face Recognition and Verification M...
 
Ijetcas14 435
Ijetcas14 435Ijetcas14 435
Ijetcas14 435
 
Ijarcet vol-2-issue-4-1352-1356
Ijarcet vol-2-issue-4-1352-1356Ijarcet vol-2-issue-4-1352-1356
Ijarcet vol-2-issue-4-1352-1356
 
Review of face detection systems based artificial neural networks algorithms
Review of face detection systems based artificial neural networks algorithmsReview of face detection systems based artificial neural networks algorithms
Review of face detection systems based artificial neural networks algorithms
 
REVIEW OF FACE DETECTION SYSTEMS BASED ARTIFICIAL NEURAL NETWORKS ALGORITHMS
REVIEW OF FACE DETECTION SYSTEMS BASED ARTIFICIAL NEURAL NETWORKS ALGORITHMSREVIEW OF FACE DETECTION SYSTEMS BASED ARTIFICIAL NEURAL NETWORKS ALGORITHMS
REVIEW OF FACE DETECTION SYSTEMS BASED ARTIFICIAL NEURAL NETWORKS ALGORITHMS
 
Cross Pose Facial Recognition Method for Tracking any Person's Location an Ap...
Cross Pose Facial Recognition Method for Tracking any Person's Location an Ap...Cross Pose Facial Recognition Method for Tracking any Person's Location an Ap...
Cross Pose Facial Recognition Method for Tracking any Person's Location an Ap...
 
Human Face Detection and Tracking for Age Rank, Weight and Gender Estimation ...
Human Face Detection and Tracking for Age Rank, Weight and Gender Estimation ...Human Face Detection and Tracking for Age Rank, Weight and Gender Estimation ...
Human Face Detection and Tracking for Age Rank, Weight and Gender Estimation ...
 
Fl33971979
Fl33971979Fl33971979
Fl33971979
 
Fl33971979
Fl33971979Fl33971979
Fl33971979
 
A Hybrid Approach to Recognize Facial Image using Feature Extraction Method
A Hybrid Approach to Recognize Facial Image using Feature Extraction MethodA Hybrid Approach to Recognize Facial Image using Feature Extraction Method
A Hybrid Approach to Recognize Facial Image using Feature Extraction Method
 
C017431730
C017431730C017431730
C017431730
 
International Journal of Engineering and Science Invention (IJESI)
International Journal of Engineering and Science Invention (IJESI)International Journal of Engineering and Science Invention (IJESI)
International Journal of Engineering and Science Invention (IJESI)
 
Kh3418561861
Kh3418561861Kh3418561861
Kh3418561861
 
184
184184
184
 
Face and Eye Detection Varying Scenarios With Haar Classifier_2015
Face and Eye Detection Varying Scenarios With Haar Classifier_2015Face and Eye Detection Varying Scenarios With Haar Classifier_2015
Face and Eye Detection Varying Scenarios With Haar Classifier_2015
 
DETECTING FACIAL EXPRESSION IN IMAGES
DETECTING FACIAL EXPRESSION IN IMAGESDETECTING FACIAL EXPRESSION IN IMAGES
DETECTING FACIAL EXPRESSION IN IMAGES
 
FACE DETECTION USING PRINCIPAL COMPONENT ANALYSIS
FACE DETECTION USING PRINCIPAL COMPONENT ANALYSISFACE DETECTION USING PRINCIPAL COMPONENT ANALYSIS
FACE DETECTION USING PRINCIPAL COMPONENT ANALYSIS
 
IRJET- A Review on Various Approaches of Face Recognition
IRJET- A Review on Various Approaches of Face RecognitionIRJET- A Review on Various Approaches of Face Recognition
IRJET- A Review on Various Approaches of Face Recognition
 

Mehr von IAEME Publication

A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSA STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
IAEME Publication
 
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSBROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
IAEME Publication
 
GANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICEGANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICE
IAEME Publication
 
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
IAEME Publication
 
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
IAEME Publication
 
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
IAEME Publication
 
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
IAEME Publication
 
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
IAEME Publication
 
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
IAEME Publication
 
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
IAEME Publication
 

Mehr von IAEME Publication (20)

IAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdfIAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdf
 
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
 
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSA STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
 
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSBROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
 
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSDETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
 
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
 
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOVOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
 
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
 
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYVISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
 
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
 
GANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICEGANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICE
 
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
 
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
 
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
 
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
 
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
 
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
 
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
 
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
 
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTA MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
 

Kürzlich hochgeladen

+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
?#DUbAI#??##{{(☎️+971_581248768%)**%*]'#abortion pills for sale in dubai@
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
panagenda
 

Kürzlich hochgeladen (20)

Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Manulife - Insurer Innovation Award 2024
Manulife - Insurer Innovation Award 2024Manulife - Insurer Innovation Award 2024
Manulife - Insurer Innovation Award 2024
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
Top 10 Most Downloaded Games on Play Store in 2024
Top 10 Most Downloaded Games on Play Store in 2024Top 10 Most Downloaded Games on Play Store in 2024
Top 10 Most Downloaded Games on Play Store in 2024
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 

50120140504002

  • 1. International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print), ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME 11 A NEW FACE RECOGNITION SCHEME FOR FACES WITH EXPRESSIONS, GLASSES AND ROTATION Walaa M Abdel-Hafiez1 , Mohamed Heshmat2 , Moheb Girgis3 , Seham Elaw4 1, 2, 4 Faculty of Science, Mathematical and Computer Science Department, Sohag University, 82524, Sohag, Egypt 3 Faculty of Science, Department of Computer Science, Minia University, El-Minia, Egypt ABSTRACT Face recognition is considered as one of the hottest research areas in computer vision field. The purpose of the proposed research work is to develop an algorithm that can recognize a person by comparing the characteristics of his/her face, which may have expressions, glasses and/or rotation, to those of known faces in a database. This work provides a simple and efficient technique to recognize human faces. The new method is based on variance estimation of the three components of the color faces images and facial features extraction of the most facial features. The features under consideration are eyes, nose and mouth. The technique used to extract facial features was developed based on feature location with respect to face dimensions. The proposed algorithm has been tested on various face images and its performance was found to be good in most cases. Experimental results show that our method of human face recognition achieves very encouraging results with good accuracy, great speed and simple computations. Keywords: Face Recognition, Facial Features Extraction, Color Spaces, Variance Estimation. I. INTRODUCTION Face recognition has been used in various applications where personal identification is required like, visual attendance systems where student identification and recognition are achieved through face recognition. Face recognition [1-4] has been used also in gaming applications, security systems, credit-card verification, criminal identifications, and teleconference, and short face recognition applications are widely used in many corporate and educational institutions. INTERNATIONAL JOURNAL OF COMPUTER ENGINEERING & TECHNOLOGY (IJCET) ISSN 0976 – 6367(Print) ISSN 0976 – 6375(Online) Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME: www.iaeme.com/ijcet.asp Journal Impact Factor (2014): 8.5328 (Calculated by GISI) www.jifactor.com IJCET © I A E M E
  • 2. International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print), ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME 12 Human faces are complex objects with features that can vary over time. However, humans have a natural ability to recognize faces and identify persons with just a glance. Our natural recognition ability also extends beyond face recognition, where we are equally able to quickly recognizing patterns, sounds or smells. Unfortunately, this ability does not exist in machines, thus the need to simulate recognition artificially in our attempts to create intelligent autonomous machines. Facial feature recognition is an example of popular applications for artificial intelligence systems. Face recognition by machines has various important applications in real life, such as, electronic and physical access control, national defense and international security. Simulating our face recognition natural ability in machines is difficult but not impossible. Throughout our lifetime, many faces are seen and stored naturally in our memories which forming a kind of database. Machine recognition of faces requires also a database, which consists of facial images that may include different face images of the same person. The development of intelligent face recognition systems requires providing sufficient information and meaningful data during machine learning of a face. Face Recognition can be defined as the visual perception of familiar faces or the biometric identification by scanning a person's face and matching it against a database of known faces. In both definitions, the faces can be identified as familiar or known faces. One of the main challenging problems in building an automated system that perform face recognition and verification tasks is face detection and facial feature extraction. Though people are good at face identification, recognizing the human faces automatically by computer is a very difficult task. Face recognition is influenced by many complications, such as the differences of facial expression, the light directions of imaging, and the variety of posture, size and angle. Even for the same person, the images taken in a different surrounding condition may be unlike. The problem is so complicated that the achievement in the field of automatic face recognition by computer is not as satisfied as the finger prints [5]. The objective of facial feature localization is to detect the presence and location of features after the locations of faces are extracted by using any face detection method. The challenges associated with face and facial feature detection methods can be attributed to the following factors [6]: • Intensity: There are three types of intensity: color, gray, and binary. • Pose: Face images vary due to the relative camera-face pose (frontal, 45º, profile), and some facial features such as an eye may become partially or wholly occluded. • Structural components: Facial features such as beards, mustaches, and glasses may or may not be present. • Image rotation: Face images directly vary for different rotations. • Poor quality: Image intensity in poor-quality images, for instance, blurry images, distorted images, and images with noise, becomes unusual. • Facial expression: The appearance of faces depends on a personal facial expression. • Unnatural intensity: Cartoon faces and rendered faces from 3D model have unnatural intensity. • Occlusion: Faces may be partially occluded by other objects such as hand, scarf, etc. • Illumination: Face images vary due to the position of light source [6]. Phimoltares et al. [6] presented algorithms for all types of face images in the presence of several image conditions. There are two main stages in their method. In the first stage, the faces are detected from an original image by using Canny edge detection and their proposed average face templates. Second, a proposed neural visual model (NVM) is used to recognize all possibilities of facial feature positions. Input parameters are obtained from the positions of facial features and the face characteristics that are low sensitive to intensity change. Finally, to improve the results, image dilation is applied for removing some irrelevant regions.
  • 3. International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print), ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME 13 Nikolaidis and Pitas [7] proposed a combined approach for facial feature extraction and determination of gaze at direction that employs some improved variations of the adaptive hough transforms for curve detection, minima analysis of feature candidates, template matching for inner facial feature localization, active contour models for inner face contour detection and projective geometry properties for accurate pose determination. Koo and Song [8] suggested defining 20 facial features. Their method detects the facial candidate regions by haar classifier, and detects eye candidate region and extracts eye features by dilate operation, then detect lip candidate region using the features. The relative color difference of a* in the L*a*b* color space was used to extract lip feature and to detect nose candidate region and detected 20 features from 2D image by analyzing end of nose. Yen and Nithianandan [9] presented an automatic facial feature extraction method based on the edge density distribution of the image. In the preprocessing stage, a face is approximated to an ellipse, and a genetic algorithm is applied to search for the best ellipse region match. In the feature extraction stage, a genetic algorithm is applied to extract the facial features, such as the eyes, nose and mouth, in the predefined sub regions. Gu et al. [5] proposed a method to extract the feature points from faces automatically. It provided a feasible way to locate the positions of the two eyeballs, near and far corners of eyes, midpoint of nostrils and mouth corners from face image. Srivastava [10] proposed an efficient algorithm for facial expression recognition system, which performs facial expression analysis in a near real time from a live web cam feed. The system is composed of two different entities: trainer and evaluator. Each frame of video feed is passed through a series of steps, including Haar classifiers, skin detection, feature extraction, feature point tracking, creating a learned support vector machine model to classify emotions to achieve a tradeoff between accuracy and result rate. Radha and Nallammal [11] described a comparative analysis of face recognition methods: principle component analysis (PCA), linear discriminant analysis (LDA) and independent component analysis (ICA) based on curvelet transform. The algorithms are tested on ORL Database. Kumar et al. [12] presented an automated system for human face recognition in a real time background world for a large homemade dataset of persons' faces. To detect real time human face AdaBoost with Haar cascade is used and a simple fast PCA and LDA are used to recognize the faces detected. The matched face is then used to mark attendance in the laboratory, in their case. El-Bashir [13] introduced a method for face recognition. After a preprocessing and normalization stage to the image, PCA is applied to recognize a specified face. If the face is not recognized correctly, then more features are extracted: face color and moment invariant. The face is recognized again using decision tree. Javed [14] proposed a computer system that can recognize a person by comparing the characteristics of face to those of known individuals. He focused on frontal two-dimensional images that have been taken in a controlled environment, i.e. the illumination and the background were constant, and used the PCA technique. The system gives good results especially with angled face views. Pattanasethanon and Savithi [15] presented a novel technique for facial recognition through the implementation of successes mean quantization transform and spare network of winnow with the assistance of eigenface computation. After having limited the frame of the input image or images from web-cam, the image has cropped into an oval or ellipse shape. Then the image is transformed into grey scale color and is normalized in order to reduce color complexities. They also focused on the special characteristics of human facial aspects such as nostril areas and oral areas, compared the images obtained by web-cam with images in database, and have good accuracy with low time.
  • 4. International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print), ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME 14 Wu et al. [16] presented a system that can automatically remove eyeglasses from an input face image. The system consists of three modules: eyeglasses recognition, localization and removal. Given a face image, first, an eyeglasses classifier is used to determine if a pair of eyeglasses is present. Then, a Morkov chain Monte Carlo method is applied to locate the glasses by searching for the global optimum of the posteriori. Finally, a novel example based approach has been developed to synthesize an image with eyeglasses removed from the detected and localized face image. The experiments demonstrated that their approach produces good quality of face images with eyeglasses removed. Chen and Gao [17] presented a local attributed string matching (LAStrM) approach to recognize face profiles in the presence of interferences. The conventional profile recognition algorithms heavily depend on the accuracy of the facial area cropping. However, in realistic scenarios the facial area may be difficult to localize due to interferences (e.g., glasses, hairstyles). The proposed approach is able to efficiently find the most discriminative local parts between face profiles addressing the recognition problem with interferences. Experimental results have shown that the proposed matching scheme is robust to interferences compared against several primary approaches using two profile image databases (Bern and FERET). This paper presents a new face recognition method for faces with expressions, glasses and/or rotation. The proposed method uses variance estimation of RGB components to compare the extracted faces and the faces in the database used in comparison. In addition, Euclidean distance of facial features of the extracted faces from test image and faces extracted from the database after a variance test is used. The rest of this paper is organized as follows. Section II describes the methodology of the proposed method with its stages: variance estimation, feature extraction, method representation and the proposed algorithm. Section III presents the results and method analysis. Section IV draws the conclusion of this work and possible points for future work. II. METHODOLOGY The face and facial feature detection algorithms are applied to detect generic faces from several face images. Most automatic face recognition approaches are based on frontal images. Facial profiles, on the other hand, provide complementary information of the face that is not present in frontal faces. Fusion of frontal and profile views makes the overall personal identification technique foolproof and efficient. The proposed face recognition method is based on the average variance estimation of the three components of RGB faces images, and the extraction of the most facial features. The features under consideration are eyes, nose and mouth. The technique used to extract facial features is based on feature location with respect to the dimensions of the face image. Given a face image, which obtained from a camera or preprocessed previously, our goal is to identify this face image using a database of known humans' faces. Therefore, our algorithm is divided into three main steps. First: variance estimation of faces images. Second: facial feature extraction, an effective method to extract facial features like eyes, nose and mouth depending on their locations with respect to the face region is used, which we have developed before in [18]. Third: similar face identification or image searching; the goal of this step is to scan the database of known faces to find the most similar faces to the test face.
  • 5. International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print), ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME 15 1. Variance Estimation Variance calculation is a very light calculation and considered as an important constraint to prove similarity between two images. Let x be a vector of dimension n, the variance of x can be calculated as follows: n xix n i ∑ − = =1 )( var 2 , (1) where x is the mean value of x . However, it is not necessary that the two images which have the same variance to have the same contents. Different images may have the same value of variance because variance estimation is totally depending on the values of image pixels and their mean value. So the variance is used at first to filter the database of faces and extract faces that have the same or close value to variance of the input face image, then another test is required to detect the most similar faces to this test face [18]. When working with RGB color images, there are three values for each pixel in the image, representing the red, green, and blue components. To compute the variance of RGB image, the variance for each color is calculated separately. So there are three values for variance, one for the red values, another for the green values and third for the blue values [18], which are calculated as follows: 2 2 2 1 1 1 ( ) ( ) ( ) , , , n n n r r g g b b i i i red green blue x x x x x x v v v n n n = = = − − − = = = ∑ ∑ ∑ (2) To simplify the comparison, the average of the three values is computed as follows: ( ) , 3 red green bluev v v v + + = (3) 2. Facial Features Extraction In this part of work, the aim is to compare two color faces to decide whether they both belong to the same person or not and detect the similarity between them using Euclidean distance. RGB (Red, Green and Blue) color space, fig. 1, which is used here, is an additive color system based on tri-chromatic theory. It is often found in systems that use a CRT to display images. The RGB color system is very common, and is being used in virtually every computer system as well as television, video etc [19], [20]. Red Green Blue Figure 1: RGB color model
  • 6. International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print), ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME 16 In RGB color model, any source color (F) can be matched by a linear combination of three color primaries, i.e. Red, Green and Blue, provided that none of those three can be matched by a combination of the other two, see fig. 1. Here, F can be represented as: ,F r R gG bB= + + (4) where r, g and b are scalars indicating how much of each of the three primaries (R, G and B) are contained in F. The normalized form of F can be as follows: ' ' ' ,F R R G G B B= + + (5) where ' / ( ) , ' / ( ) , (6) ' / ( ) , R r r g b G g r g b B b r g b = + + = + + = + + To extract facial features, we used our method proposed in [18], which is based on feature location with respect to the whole face region. By detecting the candidate regions of left eye, right eye, nose and mouth, by training, then applying the obtained dimensions of each region on several other faces with the same size, the results were very good, as shown in fig. 2. Given a face image of 200 pixels height and 200 pixel width, after training with a lot of images, we found that the candidate region of eyes is located between rows 60 and 95, columns 25 and 80 for right eye and columns 115 and 170 for left eye. The candidate region for the nose is located between rows 110 and 145 and columns 75 and 125 and the candidate region for the mouth is located between rows 145 and 185 and columns 60 and 135. When applying the dimensions obtained by training on many face images, we found that they there were suitable for any face image with the same width and height even it has expression, as shown in fig. 2. Figure 2: Examples of Feature extraction (6)
  • 7. International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print), ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME 17 This feature extraction technique can be generalized and the candidate region for each feature, which is based on height and width of the face image to match any face image size can be as follows: • Right eye: Rows from (height/3.3) to (height /2.1) Columns from (width/8) to (width/2.5) • Left eye: Rows from (height/3.3) to (height /2.1) Columns from (width/1.7) to (width/1.17) • Nose: Rows from (height/1.8) to (height /1.38) Columns from (width/2.67) to (width/1.6) • Mouth: Rows from (height/1.38) to (height /1.08) Columns from (width/3.33) to (width/1.48) 3. Method Representation The proposed algorithm consists of three parts. Firstly, variance estimation is applied to extract database images, which have a close variance value to the test image. Secondly, the features extraction method is used to extract facial features from the face images. Finally, Euclidean distance of facial features is computed by the following equation: [ ] [ ]( ) [ ] [ ]( ) [ ] [ ]( ) d abs test feature R matched feature R abs test feature G matched feature G abs test feature B matched feature B = − + − + − (7) By applying eq. (7) to find the distance between the right eye region of the test image and the right eye region of each image, which has variance value close to the variance value of the test image (returned from variance test), then, by applying eq. (7) to left eye, nose and mouth regions and find summation of these four distance values, it can be decided which of the images that have close variance value is the most similar to test image. The steps of the proposed algorithm are shown in fig. 3. Step 1: Read input image. Step 2: Read the database of images and calculate the variance of each image by using eq. (2) , (3), and put variance values in an array, Step 3: Calculate variance of the test image using eq. (2), (3). Step 4: Compare variance value of test image and each image in database and keep locations of the most similar images to test image, which satisfy the condition ( )600 variance difference 600− ≤ ≤ , in an array. Step 5: For i=1 to number of similar images which extracted from step 4. a) Extract facial features from each image according to location (right eye – left eye – nose – mouth). b) Calculate the Euclidean distance between the 3-arrays containing the RGB color values of each feature using eq. (7) plus the Euclidean distance between the 3-arrays containing the RGB color values of the whole test image and each similar image from step 4. Step 6: Detect minimum distance (d ) and location of the image that has the minimum distance from step 5. Step 7: Display the best-matched image from the database. Figure 3: The proposed algorithm's steps
  • 8. International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print), ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME 18 III. RESULTS AND DISCUSSION The experiments were performed on a computer with 2.20 GHz speed and 4 Gbyte RAM using several color images containing faces and a database of 10 different images with different sizes, as shown in fig. 4. The test images include different images for the same person with different conditions. Some images have expressions, glasses and some rotation, as shown in fig. 5. The images in the database have been chosen carefully such that they are standard and have no expressions if possible. The proposed algorithm gives good results in recognizing all the test images, which belong to the same person in the database, with different expressions, glasses and rotation. Even if the gaze direction is different, the proposed algorithm succeeds in returning the correct location of the right image in the used database. Figure 4: The used database Figure 5: The used database and some of test images Table 1 shows some of the results obtained using 150 test RGB images of 10 different persons and a database of 10 standard RGB images of those persons, which is shown in fig. 4. The first column of the table shows the test face. The next columns show the results that were obtained by applying the classical method, variance estimation formula, feature extraction method, the proposed method and the time in each method. The classical method is the general method in comparing two images, by comparing pixel by pixel and computing the summation of the difference of all pixels. The classical method performs on the whole image without partitioning. The variance estimation is 54321 109876
  • 9. International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print), ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME 19 applied by using eq. (2) and (3). We have displayed the results of the variance separately to show how the variance computation is efficient and important in comparing similarity between images. When we used the variance estimation as a first test in the proposed method, we noticed that it gives the correct image location from the database if the test image and the matched image in the database have similar conditions of illumination and background. Also, we have applied the feature extraction method separately to study how it is efficient in face recognition. Facial features are extracted and Euclidean distance is computed for each feature then the summation of the difference is obtained. By comparing the difference between the test image and all the images in the database, the matched image is detected as the image has minimum difference. It is noticed from the table that the classical method and variance estimation method have less time than the two other methods. The execution of the proposed method proceeds as follows: the first test (variance test) with variance difference range equals [-600,600] is applied first to detect the images that have close variance values to the test image. (It should be noted that the variance difference range is arbitrary and can be changed). The algorithm returns the locations of faces whose variance value close to the variance of the test face. In order to know which one of them is the same or the closest to the test face, the facial features of the test face and the facial features of the obtained face images are extracted then the Euclidean distance of their RGB components is calculated by eq. (7). The face image with the minimum distance (d) is considered as the best-matched image and its location is returned. The search efficiency is evaluated by how many times the distance (d) computations are performed on average compared to the size of the database. In the proposed method the total number of distance calculations is small, because it uses the variance test to find out the face images that have a close variance value to the input face image, then the distance computation is performed only on those images where their number is always small compared to the database size. But execution time of the proposed method is high compared to the other methods because the proposed method works in two stages or two tests variance estimation and facial feature extraction where each of these stages take some time. The execution time depends on the database size. The execution time of the classical method is 0.43 seconds on average, the execution time of the variance estimation method is 0.1 seconds on average, the execution time of facial feature extraction method is 0.22 seconds on average and the execution time of the proposed method is 1.06 seconds on average, as shown in Fig. 6. In most cases, the proposed algorithm gives good results. However, in some cases, the results are not good, because the proposed algorithm is affected by illumination conditions in some images, zooming and big rotation in some others, see Table 2. Figure 6: The time chart of the proposed algorithm and comparison methods 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 Time(inseconds) Images
  • 10. International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print), ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME 20 Table 1: Some results of the proposed algorithm and the comparison methods
  • 11. International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print), ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME 21 Table 1: Some results of the proposed algorithm and the comparison methods (Continued)
  • 12. International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print), ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME 22 Table 2: Some false positive results of the proposed algorithm and the comparison methods IV. CONCLUSION In this paper, a new method of face recognition, for faces with expressions, glasses and/or rotation, based on variance estimation and facial feature extraction is proposed. It can be used in face recognition systems such as video surveillance, human computer interfaces, image database management and smart home applications. The proposed algorithm has been tested using a database of faces and the results showed that it is able to recognize a variety of different faces in spite of different expressions, rotation and illumination conditions. Zoomed images and their effect on the recognition of humans need further investigation. REFERENCES [1] P. Phillips, P. Flynn, T. Scruggs, K. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min and W. Worek, Overview of the Face Recognition Grand Challenge, proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), (2005). [2] S. Lin, An introduction to Face Recognition technology, informing science issue on multimedia informing technologies–part 2, 2000, 3(1). [3] Z. Pan, G. Healey, M. Prasad and B. Tromberg, Face Recognition in Hyperspectral Images, IEEE Transactions On Pattern Analysis And Machine Intelligence, 25(12), 2003, 1552-1560. [4] V. Blanz and T. Vetter, Face Recognition Based on Fitting a 3D Morphable Model, IEEE Transactions On Pattern Analysis And Machine Intelligence, 25(9), 2003. [5] H. Gu, G. Su and C. Du, Feature Points Extraction from Faces, Image and Vision Computing NZ, Palmerston North, 2003, 154-158. [6] S. Phimoltares, C. Lursinsap and K. Chamnongthai, Face detection and facial feature localization without considering the appearance of image context, Image and Vision Computing, 25, 2007, 741-753. [7] Α. Nikolaidis and I. Pitas, Facial Feature Extraction and Pose Determination, Pattern Recognition, Elsevier, 33(11), 2000, 1783-1791. [8] H. Koo and H. Song, Facial Feature Extraction for Face Modeling Program, International Journal Of Circuits, Systems And Signal Processing, 4(4), 2010, 169-176. [9] G. Yen and N. Nithianandan, Facial Feature Extraction Using Genetic Algorithm, Proceedings of the Evolutionary Computation, 2, IEEE 2002, 1895-1900.
  • 13. International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367(Print), ISSN 0976 - 6375(Online), Volume 5, Issue 4, April (2014), pp. 11-23 © IAEME 23 [10] S. Srivastava, Real Time Facial Expression Recognition Using A Novel Method, The International Journal of Multimedia & Its Applications (IJMA), 4(2), 2012, 49-57. [11] V. Radha and N. Nallammal, Comparative Analysis of Curvelets Based Face Recognition Methods, Proceedings of the World Congress on Engineering and Computer Science, 1, 2011. [12] K. Kumar, S. Prasad, V. Semwal and R. Tripathi, Real Time Face Recognition Using Adaboost Improved Fast PCA Algorithm, International Journal of Artificial Intelligence & Applications (IJAIA), 2(3), 2011. [13] M. El-Bashir, Face Recognition Using Multi-Classifier, Applied Mathematical Sciences, 6(45), 2012, 2235 - 2244. [14] A. Javed, Face Recognition Based on Principal Component Analysis, International Journal of Image, Graphics and S ignal Processing, 2, 2013, 38-44. [15] P. Pattanasethanon and C. Savithi, Human Face Detection and Recognition using Web-Cam, Journal of Computer Science, 8(9), 2012, 1585-1593. [16] C. Wu, C. Liu, H. Shum, Y. Xu and Z. Zhang, Automatic Eyeglasses Removal from Face Images, ACCV2002: The 5th Asian Conference on Computer Vision, 2002. [17] W. Chen and Y. Gao, Recognizing Face profiles in the Presence of Hairs/glasses Interferences, 11th. International Conference Control, Automation, Robotics and Vision (ICARCV), 2010, 1854-1859. [18] W. Mohamed, M. Heshmat, M. Girgis and S. Elaw, A new Method for Face Recognition Using Variance Estimation and Feature Extraction, International Journal of Emerging Trends and Technology in Computer Science (IJETTCS), 2(2), 2013, 134-141. [19] A. Ford and A. Roberts, Colour Space Conversions, Technical Report, http://www.poynton.com/PDFs/coloureq.pdf, 1998. [20] C. Yang and S. Kwok, Efficient Gamut Clipping for Colour Image, Processing using LHS and YIQ", Optical Engineering Journal, 42(3), 2003, 701–711. [21] U.K.Jaliya and J.M.Rathod, “A Survey on Human Face Recognition Invariant to Illumination”, International journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 2, 2013, pp. 517 - 525, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375. [22] Abhishek Choubey and Girish D. Bonde, “Face Recognition Across Pose With Estimation of Pose Parameters”, International Journal of Electronics and Communication Engineering &Technology (IJECET), Volume 3, Issue 1, 2012, pp. 311 - 316, ISSN Print: 0976- 6464, ISSN Online: 0976 –6472. [23] J. V. Gorabal and Manjaiah D. H., “Texture Analysis for Face Recognition”, International Journal of Graphics and Multimedia (IJGM), Volume 4, Issue 2, 2013, pp. 20 - 30, ISSN Print: 0976 – 6448, ISSN Online: 0976 –6456. [24] S. K. Hese and M. R. Banwaskar, “Appearance Based Face Recognition by PCA and LDA”, International journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 2, 2013, pp. 48 - 57, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.