International Journal of Advances in Applied Sciences (IJAAS)
Vol. 9, No. 4, December 2020, pp. 326~332
ISSN: 2252-8814, DOI: 10.11591/ijaas.v9.i4.pp326-332
Journal homepage: http://ijaas.iaescore.com
Powerful processing to three-dimensional facial recognition
using triple information
Mohammad Karimi Moridani1, Ahad Karimi Moridani2, Mahin Gholipour3
1,3 Department of Biomedical Engineering, Faculty of Health, Tehran Medical Sciences, Islamic Azad University, Iran
2 Technologies de L'information, Collège CDI, Canada
Article Info
Article history:
Received Apr 2, 2020
Revised Jun 11, 2020
Accepted Jul 9, 2020

ABSTRACT
Face detection plays a crucial role in identifying individuals and criminals in security, surveillance, and control systems. Human face recognition is remarkable: faces can easily be identified even after years of separation, and this ability persists under changes in appearance such as aging, glasses, a beard, or small changes in the face. The method presented here is based on 150 three-dimensional images from the Bosphorus database, acquired with a high-range laser scanner at Boğaziçi University in Turkey. This paper presents powerful processing for face recognition based on a combination of salient information and facial features, such as the eyes and nose, detected in three-dimensional scans through analysis of surface curvature. The triple formed by the nose and the two eyes is used with a principal component analysis algorithm and a support vector machine to detect and classify face versus non-face regions. The results, obtained with different facial expressions and from different viewing angles, indicate the efficiency of the proposed processing.
Keywords:
3D image
Facial information
Information processing
Mean and Gaussian curvature
SVM
This is an open access article under the CC BY-SA license.
Corresponding Author:
Mohammad Karimi Moridani,
Department of Biomedical Engineering, Faculty of Health, Tehran Medical Sciences,
Islamic Azad University,
Tehran Province, Tehran, Danesh Rd, Iran.
Email: karimi.m@iautmu.ac.ir
1. INTRODUCTION
Face detection is of great importance to many groups. Its crucial applications in areas such as human-computer interfaces, human authentication, criminal identification, and face monitoring systems are undeniable. In the methods proposed to determine the position of specific points in three-dimensional images, the nose is regarded as a reference point from which the remaining points can be located. Several methods have been proposed to determine the location of the nose. In some of them, the point nearest to the camera is regarded as the nose, and the other points are then extracted relative to the nose position. This assumption holds only when there is no rotation around the x- and y-axes; in other words, the face must be viewed frontally.
Xu et al. [1] introduced a hierarchical feature extraction scheme to identify the positions of the nose tip and nose ridge. They presented an effective-energy concept to describe the local distribution of neighboring points and to detect candidate nose tips. Finally, a support vector machine (SVM) classifier was used to select the correct nose tip. Dibeklioğlu et al. [2, 3] presented strategies for detecting facial landmarks on 3D facial datasets to enable pose correction under significant pose variations.
They presented a statistical strategy to detect facial features using the gradient of the depth map, based on training a model of local features. The technique was tested against the FRGC v1 and Bosphorus databases; however, data with pose variations were not included in the investigation. They also presented a nose tip localization and segmentation technique using curvature-based heuristic analysis. Moreover, although the Bosphorus database they used comprises 3,396 facial scans, these were acquired from only 81 subjects. Finally, no localization distance error results were reported.
In other research, Romero-Huertas et al. [4] introduced a graph matching method to determine the locations of the nose tip and inner eye corners. They presented the distance-to-local-plane concept to describe the local distribution of neighboring points and to identify convex and concave regions of the face. Finally, after graph matching has eliminated false landmarks, the best combination of landmark points is selected against the trained landmark graph model according to the minimum Mahalanobis distance. The technique was tested on the FRGC v1 and FRGC v2 databases, containing 509 and 3,271 scans, respectively. They reported a considerable success rate of 90% with thresholds of 15 mm for the nose tip and 12 mm for the inner eye corners.
Finally, Perakis et al. [5, 6] presented techniques for detecting important facial points (such as the inner and outer eye corners, the mouth corners, and the nose and chin tips) from 2.5D scans. Local shape and curvature processing using the shape index, extrusion maps, and spin images was used to find candidate landmark points, which are then identified and labeled by matching them against a statistical facial landmark model [7, 8].
The goal of this paper is to develop an efficient face registration method based on the eye-nose triple. We then discuss the problems of existing face recognition systems and possible research directions that could help solve these issues. Three-dimensional methods can provide better robustness to such sources of variation than 2D-based methods. Automatic face recognition has become one of the most important research topics in image processing due to its applications in human-robot interaction, pain detection, lie detection, and other psychological and medical applications.
2. MATERIAL AND METHOD
In this method, a three-dimensional image is assumed to be represented as a range image in which each pixel location (i, j) stores the (X, Y, Z) coordinates of the corresponding point of the three-dimensional scene. Some scanners deliver the data in the form of polygon models, usually triangular meshes; in this case, a range image can be obtained using the Z-buffer algorithm [1]. The resulting image may contain any number of faces, depending on the application and the imaging device. To reduce the computational burden, we first search for facial features such as the eyes and nose, so the first step is to segment the image into regions related to these facial features.
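As a rough illustration of this step, the sketch below (in Python with NumPy, a tooling choice of ours, not part of the paper) converts a cloud of (X, Y, Z) samples taken from a triangular model into a range image by keeping, per pixel, the point nearest to the camera. It is a simplified point-splat variant of the Z-buffer idea rather than the full triangle-rasterizing algorithm, and the grid size and the +Z viewing direction are assumptions.

```python
import numpy as np

def points_to_range_image(points, width=200, height=200):
    """Simplified point-splat Z-buffer: project (X, Y, Z) samples onto a
    regular pixel grid and keep, for each pixel, the Z value closest to
    the camera (here: the largest Z, assuming the camera looks along +Z)."""
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]

    # Map object coordinates to integer pixel indices.
    col = ((x - x.min()) / (np.ptp(x) + 1e-9) * (width - 1)).astype(int)
    row = ((y - y.min()) / (np.ptp(y) + 1e-9) * (height - 1)).astype(int)

    depth = np.full((height, width), -np.inf)
    for r, c, zz in zip(row, col, z):
        if zz > depth[r, c]:            # keep the nearest surface sample
            depth[r, c] = zz
    depth[np.isinf(depth)] = np.nan     # pixels that received no sample
    return depth
```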
Our goal here is to detect the eye-nose triangle on real faces. The face is registered to a standard position and orientation, which reduces the variability of the facial depth data, and the distances of the nose-eye triangle in the image are also calculated.
2.1. Nose and eyes identification
The HK segmentation method is used to find the nose and the inner corners of the eyes. Using the mean curvature H and the Gaussian curvature K of the facial surface, the facial regions can be classified: the nose tip corresponds to a curvature maximum and the inner eye corners to minima. Because of its prominent, hill-like shape, the nose has the highest mean curvature in its neighborhood, so the points of maximum curvature are searched on the face. For this purpose, the mean surface curvature is computed, the areas with larger curvature are identified, and the area with the greatest curvature is taken as the tip of the nose. The larger the curvature of an area, the larger the total curvature accumulated over that area. Moreover, noise can inflate the curvature at one or a few isolated pixels, whereas the region being searched for is genuinely curved over an extended area; an averaging filter therefore reduces the noise and prevents a wrong localization [9]. The mean curvature image before and after filtering is shown in Figure 1.
This method determines the position of the nose under any rotation around the x-, y-, and z-axes. The only limitation arises when more than half of the nose is hidden by rotation around the y-axis. Because the method relies only on the mean curvature of the face and does not require training or classification data, it is computationally faster than methods that search for the nose from different angles.
Figure 1. The front part of the face (top right), the image of average curvature (top left), the front part of
the face with more light intensity (bottom left), the image of the average curvature after
filtering (bottom right)
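To make the curvature step concrete, the sketch below computes the mean and Gaussian curvature of a range image with the standard Monge-patch formulas and then applies the averaging filter and maximum search described above. It is only a sketch of the idea: it assumes the depth value stored at each pixel is the distance from the camera (so that the protruding nose tip appears as a maximum of the mean curvature), and the window size is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mean_gaussian_curvature(depth):
    """Mean (H) and Gaussian (K) curvature of a range image z(x, y),
    using the Monge-patch formulas on finite differences."""
    zy, zx = np.gradient(depth)          # derivatives along rows, columns
    zxy, zxx = np.gradient(zx)
    zyy, _ = np.gradient(zy)
    g = 1.0 + zx**2 + zy**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * g**1.5)
    K = (zxx * zyy - zxy**2) / g**2
    return H, K

def locate_nose_tip(depth, win=7):
    """Nose-tip candidate: the pixel with the largest locally averaged
    mean curvature, mirroring the filtering step described in the text."""
    H, _ = mean_gaussian_curvature(depth)
    H_smooth = uniform_filter(np.nan_to_num(H), size=win)
    return np.unravel_index(np.argmax(H_smooth), H_smooth.shape)
```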
2.2. Recognition in image rotation
The computational cost of the rotating-image approach is high. First, a set of areas that are likely to contain the nose and eyes is specified; then, to find the exact location, the rotated image is extracted and a support vector machine (SVM) classifier identifies the correct area [10, 11]. This approach therefore requires training data and a trained classifier. To determine the location of the nose, the angle around the y-axis is quantized, the image is rotated around the y-axis by each quantized angle, and the point nearest to the camera is taken as a candidate. After a full rotation around the y-axis, the nose is identified. This method can locate the nose when the rotation is around the y-axis, but it is not cost-effective when the rotation occurs around other axes.
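A minimal sketch of this quantized-rotation search is given below; the 15-degree step is an illustrative choice, "nearest to the camera" is modeled as the largest Z coordinate, and the SVM verification stage is omitted.

```python
import numpy as np

def nose_candidates_by_rotation(points, step_deg=15):
    """For each quantized rotation around the y-axis, rotate the 3D point
    cloud and take the point nearest to the camera (largest Z after
    rotation) as a nose candidate; in the approach described above these
    candidates would then be verified by an SVM classifier."""
    pts = np.asarray(points, dtype=float)
    candidates = []
    for beta in np.deg2rad(np.arange(0.0, 360.0, step_deg)):
        Ry = np.array([[ np.cos(beta), 0.0, np.sin(beta)],
                       [ 0.0,          1.0, 0.0         ],
                       [-np.sin(beta), 0.0, np.cos(beta)]])
        rotated = pts @ Ry.T
        candidates.append((beta, pts[np.argmax(rotated[:, 2])]))
    return candidates   # list of (angle, candidate point in the original frame)
```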
2.3. Three-dimensional face rotation
Three rotation matrices, or their product, can be used to rotate three-dimensional images around the coordinate axes [12-14]. If alpha denotes the rotation around the x-axis, beta the rotation around the y-axis, and gamma the rotation around the z-axis, the rotation matrix for each axis is given by (1) to (3), and formula (4) is the product of the three matrices:
R_1(\alpha) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix} \quad (1)

R_2(\beta) = \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix} \quad (2)

R_3(\gamma) = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (3)

R = R_1(\alpha) R_2(\beta) R_3(\gamma) =
\begin{bmatrix}
\cos\beta\cos\gamma & -\cos\beta\sin\gamma & \sin\beta \\
\sin\alpha\sin\beta\cos\gamma + \cos\alpha\sin\gamma & -\sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & -\sin\alpha\cos\beta \\
-\cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma & \cos\alpha\sin\beta\sin\gamma + \sin\alpha\cos\gamma & \cos\alpha\cos\beta
\end{bmatrix} \quad (4)
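For illustration, the composite rotation of (1) to (4) can be assembled as follows (a minimal sketch assuming NumPy; the function name is ours):

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """Composite rotation R = R1(alpha) R2(beta) R3(gamma) of equations
    (1)-(4): successive rotations about the x-, y-, and z-axes."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    R1 = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    R2 = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    R3 = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return R1 @ R2 @ R3
```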
Multiplying the point coordinates by this matrix yields, for each rotation, the coordinates of the rotated image. To use this matrix for rotating both the three-dimensional data and the corresponding two-dimensional image, the intensity is appended to the data so that the rotated image coordinates can be found. The data matrix is therefore arranged as in formula (5), with the intensity added as the last row of the image data.
D = \begin{bmatrix}
x_1 & x_2 & x_3 & x_4 & x_5 & \cdots \\
y_1 & y_2 & y_3 & y_4 & y_5 & \cdots \\
z_1 & z_2 & z_3 & z_4 & z_5 & \cdots \\
I_1 & I_2 & I_3 & I_4 & I_5 & \cdots
\end{bmatrix} \quad (5)
The important point here is that the two-dimensional and three-dimensional images must be the same size and must be accurately aligned with each other. In the database used, the two-dimensional images are larger than the three-dimensional images, so the two-dimensional images are first resized to the dimensions of the three-dimensional images. A common point is then marked in both images, the two images are aligned by their x and y overlap, and the data become available for each coordinate [15, 16]. Because the data matrix has changed, the rotation matrix also needs to be changed. Since the intensity data do not affect the pixel locations, the rotation matrix takes the form of formula (6), in which zeros and a one are appended so that the light intensity data are passed through without alteration.
\begin{bmatrix} R & \mathbf{0} \\ \mathbf{0}^{T} & 1 \end{bmatrix} \quad (6)
The transformed data are then rearranged into a two-dimensional matrix to obtain the rotated two-dimensional image [17].
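As a small sketch of this step (again in NumPy, under the same assumptions as before), the 3 x 3 rotation R of equation (4) is embedded in the 4 x 4 block matrix of equation (6) and applied to the 4 x N data matrix of equation (5), so that the intensity row passes through unchanged:

```python
import numpy as np

def rotate_points_with_intensity(D, R):
    """Apply a 3 x 3 rotation R to the 4 x N data matrix D of equation (5),
    whose columns are (x, y, z, I): the block matrix of equation (6)
    rotates the coordinates and leaves the intensity row untouched."""
    R4 = np.eye(4)
    R4[:3, :3] = R              # [[R, 0], [0, 1]] block structure
    return R4 @ D
```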
2.4. Face detection
Using depth information in this way has several advantages. First, the primary data represent depth rather than light intensity, so the obtained image is stable regardless of the illumination level, the angle of the light, or the face size. These data also do not rely on the light reflected from the face and are therefore not affected by skin color changes caused by makeup, sunlight, or sunburn. In general, three-dimensional data can be translated and rotated to any desired angle in three-dimensional space; accordingly, the features can be extracted at the desired angle. Hence, if the image is not at the desired angle, it can be rotated and recomputed at that angle [18-20]. The need for three-dimensional imaging cameras, which are more expensive than two-dimensional cameras, is one of the disadvantages of this method, and facial hair can make the extraction of depth data difficult [20]. Because faces are detected from the eye-nose triple, the system can cope with problems such as occlusion of the ears, hair, eyebrows, etc. (Figure 2).
Figure 2. Face detection using nose and eyes
The complete face can be captured using a panoramic camera or multiple photographs taken from different directions.
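The abstract describes applying a principal component analysis algorithm and a support vector machine to the eye-nose triple in order to separate face from non-face regions. The sketch below shows one way such a classifier could be assembled with scikit-learn; the descriptor (the three pairwise 3D distances of the triangle) and the pipeline settings are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def triangle_features(nose, left_eye, right_eye):
    """Hypothetical descriptor for one candidate eye-nose triple:
    the three pairwise 3D distances of the triangle."""
    n, l, r = (np.asarray(p, dtype=float) for p in (nose, left_eye, right_eye))
    return np.array([np.linalg.norm(n - l),
                     np.linalg.norm(n - r),
                     np.linalg.norm(l - r)])

def train_face_classifier(X, y):
    """X: one feature vector per candidate triple; y: 1 = face, 0 = non-face."""
    clf = make_pipeline(StandardScaler(), PCA(n_components=2), SVC(kernel="rbf"))
    clf.fit(X, y)
    return clf
```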
2.5. Experimental testing
The proposed method, applied to 150 faces from the Bosphorus database, detects the position of the nose with 99.7% accuracy and the positions of the two eyes with 99.3% and 96.7% accuracy, respectively. Table 1 reports the number of correctly detected nose and eye positions for the images in the database.
Table 1. The number of nose and eye positions detected

Viewing angle                        Nose detection   Left eye detection   Right eye detection
Front photo                          143              141                  139
Rotation of 25 degrees around y      143              143                  139
Low rotation around the z-axis       143              143                  141
High rotation around the z-axis      143              143                  141
Smiling                              143              142                  140
Open mouth                           141              141                  140
Rotation around the x-axis (up)      143              143                  141
Rotation around the x-axis (down)    142              142                  –
3. RESULTS AND DISCUSSION
The proposed algorithm was applied to 14 pictures of 150 persons; the results are shown in Table 2, and Figure 3 plots the accuracy for different error thresholds.
Table 2. Estimation accuracy of the angle for various errors

Error        z-axis    y-axis    x-axis
10° error    98.21%    99.52%    98.81%
6° error     94.88%    98.81%    92.33%
3° error     77.46%    95.00%    67.24%
Figure 3. Accuracy diagram for different errors
The running time of this algorithm, excluding the time needed to determine the location of the nose, is less than 800 milliseconds, and the method requires no training data. Because of the hierarchical structure, a wrong estimate of one angle can also lead to wrong estimates of the other angles. Owing to the mapping of the data onto the plane Z = 0, an error in the calculation of the angle around the z-axis is possible. If the ellipse parameters could be computed directly in three-dimensional space, the accuracy could be increased. Also, if the angles around the x- and y-axes could be determined from the original image together with the angle estimation, both the running time and the error of the method would be reduced.
One of the most important capabilities of the human visual system is face recognition, which plays an important role in each person's social life; our relationships with other people rest on our ability to identify them. One of the most prominent aspects of human face recognition is its high accuracy and reliability: humans recognize faces easily and without much effort, whereas this is not an easy task for computers. The main purpose of this study is to investigate the effect of selecting appropriate features of three-dimensional images for face recognition. The results show the success of this method in identifying faces with different characteristics. The different approaches proposed so far are summarized in Table 3.
Table 3. Comparison of classification accuracy of different methods

Name                       3D face database           Maximum accuracy
Blanz et al. [21]          FRGC                       92%
Dibeklioglu et al. [22]    Bosphorus                  79.41%
Mahmood et al. [23]        GavabDB                    90%
Berretti et al. [24]       UND, FRGC v2.0, GavabDB    82.1%
Perakis et al. [25]        UND                        –
Ding et al. [26]           LFW                        92.95%
Proposed method            Bosphorus                  99.52%
Due to the sensitive nature of face detection, high precision is of the utmost importance. The obtained results show that an open mouth increases the chance of missing the nose. Since this method identifies not only the nose but also the eyes, this weakness is largely compensated; however, when both eyes are covered, when that part of the image is lost, or when the mouth is open, the identification rate is lower than when the mouth is closed.
4. CONCLUSION
Machine recognition of human faces from 3D images has become an active research area in the image processing, pattern recognition, neural networks, and computer vision communities.
The face plays an essential role in identifying people and conveying their feelings in society. The human ability to recognize faces is remarkable: we can recognize thousands of faces in our lifetime and, at a glance, recognize familiar faces even after years of separation. Face recognition has become an important issue in applications such as security systems, credit card control, and crime identification.
In this article, a new method for automatic face recognition using 3D face images was presented, in which geometric features of the face were used to identify faces with the help of classification algorithms. The results of applying the proposed method to the Bosphorus database showed the highest accuracy compared with previous methods.
ACKNOWLEDGMENT
I would like to express my deepest appreciation to Professor Bulent Esnikor from Boğaziçi
University for providing me with significant data that greatly assisted the research.
REFERENCES
[1] Xu C., Tan T., Wang Y., and Quan L., “Combining local features for robust nose location in 3D facial data,” Pattern Recognition Letters, vol. 27, no. 13, pp. 62-73, 2006.
[2] Dibeklioglu H., Part-based 3D face recognition under pose and expression variations, Master's thesis, Boğaziçi University, 2008.
[3] Dibeklioglu H., Salah A., Akarun L., “3D facial landmarking under expression, pose, and occlusion variations,” In
Proc. 2nd IEEE International Conference on Biometrics: Theory, Applications and Systems, pp. 1- 6, 2008.
[4] Romero-Huertas M., Pears N., “3D facial landmark localization by matching simple descriptors,” In Proc. 2nd
IEEE International Conference on Biometrics: Theory, Applications and Systems, 2008.
[5] Perakis P., Passalis G., Theoharis T., Toderici G., Kakadiaris I., “Partial matching of interpose 3D facial data for
face recognition,” In Proc. 3rd IEEE International Conference on Biometrics:Theory, Applications and Systems,
pp. 439-446, 2009.
[6] Perakis P., Theoharis T., Passalis G., Kakadiaris I., “Automatic 3D facial region retrieval from multi-pose facial datasets,” In Proc. Eurographics Workshop on 3D Object Retrieval, pp. 37-44, 2009.
[7] Xie W., Shen L., Yang M., Lai Z., “Active AU based patch weighting for facial expression recognition,” Sensors (Basel), vol. 17, p. 275, 2017.
[8] S. Berretti, A. Del Bimbo, P. Pala, “Sparse matching of salient facial curves for recognition of 3D faces with
missing parts,” IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp.374-389, 2013.
[9] Andrew J. Logan, Gael E. Gordon, Gunter Loffler, “Contributions of individual face features to face discrimination,” Vision Research, vol. 137, pp. 29-39, 2017.
[10] S. Ravi, S. Wilson, “Face detection with facial features and gender classification based on support vector
machine,” 2010 IEEE International Conference on Computational Intelligence and computing Research, ISBN:
97881 8371 3627, 2010
[11] T. Verma and R. K. Sahu, “PCA-LDA based face recognition system & results comparison by various
classification techniques,” 2013 International Conference on Green High Performance Computing (ICGHPC),
Nagercoil, pp. 1-7, 2013.
[12] Zhou, S., Xiao, S. “3D face recognition: a survey.” Hum. Cent. Comput. Inf. Sci. vol. 8, pp. 35, 2018.
[13] Borade S.N., Deshmukh R.R., Shrishrimal P., “Effect of distance measures on the performance of face recognition
using principal component analysis,” In: Berretti S., Thampi S., Srivastava P. (eds) “Intelligent systems
technologies and applications,” Advances in Intelligent Systems and Computing, vol. 384, 2016.
[14] S.M.S. Islam, R. Davies, M. Bennamoun, R. Owens and Ajmal Mian, “Multibiometric human recognition using 3D ear and face features,” Pattern Recognition, vol. 46, no. 3, pp. 613-627, 2013.
[15] X. Zhao, W. Zhang, G. Evangelopoulos, D. Huang, S. Shah, Y. Wang, I. Kakadiaris and L. Chen, “Benchmarking asymmetric 3D-2D face recognition systems,” Proc. 3D Face Biometrics Workshop, in conjunction with IEEE Int’l Conf. on Automatic Face and Gesture Recognition (FG), 2013.
[16] Yueming Wang, Gang Pan, Jianzhuan Liu, “A deformation model to reduce the effect of expressions in 3D face
recognition,” Visual Computer, vol. 27, no. 5, pp. 333-345, 2011.
[17] A. Watt, M. Watt, Advanced Animation and Rendering Techniques: Theory and Practice, Addison-Wesley, Reading, MA, 1992.
[18] A. Moreno, A. Sanchez, J. Velez, F. Diaz, “Face recognition using 3D surface extracted descriptors,” Proceedings
of the Irish Machine Vision and Image Processing, 2004.
[19] Naeem Iqbal Ratyal, Imtiaz Ahmad Taj, Muhammad Sajid, Nouman Ali, Anzar Mahmood and Sohail Razzaq,
“Three-dimensional face recognition using variance-based registration and subject-specific descriptors,”
International Journal of Advanced Robotic Systems, pp. 1-16, 2019.
[20] Miller, P., and Lyle, J. The effect of distance measures on the recognition rates of PCA and LDA based facial
recognition Tech. rep., Clemson University, 2008.
[21] Blanz V, Scherbaum K, Seidel HP, “Fitting a morphable model to 3D scans of faces,” In: Computer vision,
pp. 1-8, 2007.
[22] Dibeklioglu H, Salah AA, Akarun L., “3D facial landmarking under expression, pose and occlusion variations,” In:
Biometrics theory, applications and systems, pp. 1-6, 2008.
[23] Mahmood SA, Ghani RF, Kerim AA., “3D face recognition using pose invariant nose region detector,” In:
Computer science and electronic engineering conference, 2014.
[24] Berretti S, Del Bimbo PPA., “Sparse matching of salient facial curves for recognition of 3D faces with missing
parts,” Forensics Secu, vol. 8, pp. 374-389, 2013.
[25] Perakis P, Passalis G, Theoharis T, Toderici G, Kakadiaris IA., “Partial matching of interpose 3D facial data for
face recognition,” In: Biometrics: theory, applications, and systems, pp. 439-446, 2009.
[26] Hua WG., “Implicit elastic matching with random projections for pose-variant face recognition,” In: Comput. Vis.
Pattern Recognit, pp. 1502-1509, 2009.
Climate Change Impacts on Terrestrial and Aquatic Ecosystems.pptxClimate Change Impacts on Terrestrial and Aquatic Ecosystems.pptx
Climate Change Impacts on Terrestrial and Aquatic Ecosystems.pptx
 
module for grade 9 for distance learning
module for grade 9 for distance learningmodule for grade 9 for distance learning
module for grade 9 for distance learning
 
Grade 7 - Lesson 1 - Microscope and Its Functions
Grade 7 - Lesson 1 - Microscope and Its FunctionsGrade 7 - Lesson 1 - Microscope and Its Functions
Grade 7 - Lesson 1 - Microscope and Its Functions
 
FAIRSpectra - Enabling the FAIRification of Analytical Science
FAIRSpectra - Enabling the FAIRification of Analytical ScienceFAIRSpectra - Enabling the FAIRification of Analytical Science
FAIRSpectra - Enabling the FAIRification of Analytical Science
 
Thyroid Physiology_Dr.E. Muralinath_ Associate Professor
Thyroid Physiology_Dr.E. Muralinath_ Associate ProfessorThyroid Physiology_Dr.E. Muralinath_ Associate Professor
Thyroid Physiology_Dr.E. Muralinath_ Associate Professor
 
GBSN - Microbiology (Unit 3)Defense Mechanism of the body
GBSN - Microbiology (Unit 3)Defense Mechanism of the body GBSN - Microbiology (Unit 3)Defense Mechanism of the body
GBSN - Microbiology (Unit 3)Defense Mechanism of the body
 
Phenolics: types, biosynthesis and functions.
Phenolics: types, biosynthesis and functions.Phenolics: types, biosynthesis and functions.
Phenolics: types, biosynthesis and functions.
 
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 bAsymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
 
LUNULARIA -features, morphology, anatomy ,reproduction etc.
LUNULARIA -features, morphology, anatomy ,reproduction etc.LUNULARIA -features, morphology, anatomy ,reproduction etc.
LUNULARIA -features, morphology, anatomy ,reproduction etc.
 
THE ROLE OF BIOTECHNOLOGY IN THE ECONOMIC UPLIFT.pptx
THE ROLE OF BIOTECHNOLOGY IN THE ECONOMIC UPLIFT.pptxTHE ROLE OF BIOTECHNOLOGY IN THE ECONOMIC UPLIFT.pptx
THE ROLE OF BIOTECHNOLOGY IN THE ECONOMIC UPLIFT.pptx
 
Cyathodium bryophyte: morphology, anatomy, reproduction etc.
Cyathodium bryophyte: morphology, anatomy, reproduction etc.Cyathodium bryophyte: morphology, anatomy, reproduction etc.
Cyathodium bryophyte: morphology, anatomy, reproduction etc.
 
Proteomics: types, protein profiling steps etc.
Proteomics: types, protein profiling steps etc.Proteomics: types, protein profiling steps etc.
Proteomics: types, protein profiling steps etc.
 
Gwalior ❤CALL GIRL 84099*07087 ❤CALL GIRLS IN Gwalior ESCORT SERVICE❤CALL GIRL
Gwalior ❤CALL GIRL 84099*07087 ❤CALL GIRLS IN Gwalior ESCORT SERVICE❤CALL GIRLGwalior ❤CALL GIRL 84099*07087 ❤CALL GIRLS IN Gwalior ESCORT SERVICE❤CALL GIRL
Gwalior ❤CALL GIRL 84099*07087 ❤CALL GIRLS IN Gwalior ESCORT SERVICE❤CALL GIRL
 
CYTOGENETIC MAP................ ppt.pptx
CYTOGENETIC MAP................ ppt.pptxCYTOGENETIC MAP................ ppt.pptx
CYTOGENETIC MAP................ ppt.pptx
 
Genetics and epigenetics of ADHD and comorbid conditions
Genetics and epigenetics of ADHD and comorbid conditionsGenetics and epigenetics of ADHD and comorbid conditions
Genetics and epigenetics of ADHD and comorbid conditions
 
biology HL practice questions IB BIOLOGY
biology HL practice questions IB BIOLOGYbiology HL practice questions IB BIOLOGY
biology HL practice questions IB BIOLOGY
 

They presented a statistical strategy for locating facial features from the gradient of the depth map, based on a trained model of local features. The technique was evaluated on the FRGC v1 and Bosphorus databases; however, scans with pose variations were not included in the evaluation. They also presented a nose tip localization and segmentation technique based on curvature heuristics. Although the Bosphorus database they used comprises 3,396 facial scans, these were acquired from only 81 subjects, and no localization distance error results were reported.

In other research, Romero-Huertas et al. [4] introduced a graph matching method to determine the positions of the nose tip and the inner eye corners. They proposed the distance-to-local-plane concept to describe the local distribution of neighboring points and to identify convex and concave regions of the face. After graph matching eliminates false landmarks, the combination of landmark points closest, in Mahalanobis distance, to the trained landmark graph model is selected. The technique was evaluated on the FRGC v1 and FRGC v2 databases, containing 509 and 3,271 scans, respectively. They reported a success rate of 90% with thresholds of 15 mm for the nose tip and 12 mm for the inner eye corners.

Finally, Perakis et al. [5, 6] presented techniques for detecting salient facial points (eye inner and outer corners, mouth corners, and nose and chin tips) on 2.5D scans. Local shape and curvature processing using the shape index, extrusion maps, and spin images was used to find candidate landmark points, which are then identified and labeled by matching them against a statistical facial landmark model [7, 8].

The goal of this paper is to develop an efficient face registration method based on the eye-eye-nose triple. We then discuss the problems of existing face recognition systems and possible research directions for solving them. Three-dimensional methods can be more robust to appearance variation than 2D-based methods. Automatic face recognition has become one of the most important research topics in image processing because of its applications in human-robot interaction, pain detection, lie detection, and other psychological and medical applications.

2. MATERIAL AND METHOD
In this method, it is assumed that the three-dimensional image provides, for each pixel location (i, j), the (X, Y, Z) coordinates of the corresponding point in the three-dimensional scene. Some scanners deliver the data as a polygon model, usually a triangular mesh; in that case, a range image can be obtained using the Z-buffer algorithm [1]. The resulting image may contain any number of faces, depending on the application and the imaging device. To reduce the computational cost, we first search for facial features such as the eyes and nose, so the first step is to segment the image into the regions related to these features. The goal is then to distinguish the eye-eye-nose triangle on real faces. The face is registered to a standard position and orientation, and the variability of the face depth is reduced. The triangle formed by the nose tip and the two eyes is also characterized by the distances between them, as sketched below.
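As an illustration of this triple, the following minimal sketch computes the three side lengths of the eye-eye-nose triangle from already-detected 3D landmarks; the function name and the sample coordinates are hypothetical and are not taken from the paper.

```python
import numpy as np

def triangle_features(nose, left_eye, right_eye):
    """Side lengths of the eye-eye-nose triangle used as a simple face descriptor."""
    nose, left_eye, right_eye = map(np.asarray, (nose, left_eye, right_eye))
    return np.array([
        np.linalg.norm(left_eye - right_eye),  # distance between the two eyes
        np.linalg.norm(nose - left_eye),       # nose tip to left inner eye corner
        np.linalg.norm(nose - right_eye),      # nose tip to right inner eye corner
    ])

# Hypothetical landmark coordinates (in millimetres), for illustration only.
print(triangle_features(nose=(0.0, -30.0, 95.0),
                        left_eye=(-32.0, 5.0, 70.0),
                        right_eye=(31.0, 5.0, 71.0)))
```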
2.1. Nose and eyes identification
The HK segmentation method is used to find the nose and the inner corners of the eyes. For a surface with mean curvature H and Gaussian curvature K, these facial landmarks can be identified: the nose tip corresponds to a maximum of the curvature and the inner eye corners to minima. Because the nose protrudes from the face, it has the highest curvature in its neighborhood, so the face is searched for the points of maximum curvature. To this end, the mean surface curvature is computed, the regions of higher curvature are identified, and the region with the greatest curvature is taken as the tip of the nose; the larger the curvature of a region, the more pronounced the total curvature accumulated over that region. Noise can also raise the curvature at one or a few isolated pixels, but since the sought region is curved over an extended area, an averaging filter suppresses the noise and prevents the nose tip from being localized at a wrong position [9]. The mean-curvature image before and after filtering is shown in Figure 1.

This method determines the position of the nose under any rotation around the x, y, and z axes; the main limitation arises when more than half of the nose is hidden by rotation around the y-axis. Because the method uses only the mean curvature of the face and requires no training or labeled data, it is computationally faster than methods that search for the nose from different angles. A minimal sketch of this curvature computation follows.
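As a rough illustration of this step, the sketch below computes mean and Gaussian curvature from a regular depth map and picks the extremum of the smoothed mean curvature as the nose-tip candidate; the depth-map representation, filter size, and sign convention are assumptions rather than details given in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mean_gaussian_curvature(Z):
    """Mean (H) and Gaussian (K) curvature of the surface z = Z(row, col)."""
    Zy, Zx = np.gradient(Z)        # first derivatives (rows ~ y, columns ~ x)
    Zxy, Zxx = np.gradient(Zx)     # second derivatives
    Zyy, _ = np.gradient(Zy)
    g = 1.0 + Zx**2 + Zy**2
    H = ((1 + Zx**2) * Zyy - 2 * Zx * Zy * Zxy + (1 + Zy**2) * Zxx) / (2 * g**1.5)
    K = (Zxx * Zyy - Zxy**2) / g**2
    return H, K

def locate_nose_tip(Z, filter_size=7):
    """Candidate nose tip: location of the extremum of the smoothed mean curvature."""
    H, _ = mean_gaussian_curvature(Z)
    H = uniform_filter(H, size=filter_size)   # averaging filter to suppress noisy peaks
    # Depending on whether Z increases toward or away from the sensor, the nose
    # may appear as a maximum or a minimum of H; flip the sign of H if needed.
    return np.unravel_index(np.argmax(H), H.shape)
```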
Figure 1. The front part of the face (top right), the mean-curvature image (top left), the front part of the face with higher light intensity (bottom left), and the mean-curvature image after filtering (bottom right)

2.2. Recognition in image rotation
Working with rotated images has a high computational cost. First, a set of regions likely to contain the nose and eyes is identified; then, to pinpoint the exact region, the rotated image is extracted and a support vector machine (SVM) classifier determines the exact location [10, 11]. This approach requires training data and a trained classifier. To determine the location of the nose, the angle around the y-axis is quantized, the image is rotated around the y-axis, and the point nearest to the camera is taken as a candidate; after a full rotation around the y-axis, the nose is identified. This method can find the nose position when the rotation is around the y-axis, but it is not cost-effective when the rotation occurs around the other axes.

2.3. Three-dimensional face rotation
Three rotation matrices, or their product, can be used to rotate three-dimensional images around the coordinate axes [12-14]. If α denotes rotation around the x-axis, β rotation around the y-axis, and γ rotation around the z-axis, the rotation matrices are given by (1) to (3), and their product is given in (4):

R_1(\alpha) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & \sin\alpha \\ 0 & -\sin\alpha & \cos\alpha \end{bmatrix}   (1)

R_2(\beta) = \begin{bmatrix} \cos\beta & 0 & -\sin\beta \\ 0 & 1 & 0 \\ \sin\beta & 0 & \cos\beta \end{bmatrix}   (2)

R_3(\gamma) = \begin{bmatrix} \cos\gamma & \sin\gamma & 0 \\ -\sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix}   (3)

R = R_1 R_2 R_3 = \begin{bmatrix} \cos\beta\cos\gamma & \sin\alpha\sin\beta\cos\gamma + \cos\alpha\sin\gamma & \sin\alpha\sin\gamma - \cos\alpha\sin\beta\cos\gamma \\ -\cos\beta\sin\gamma & \cos\alpha\cos\gamma - \sin\alpha\sin\beta\sin\gamma & \sin\alpha\cos\gamma + \cos\alpha\sin\beta\sin\gamma \\ \sin\beta & -\sin\alpha\cos\beta & \cos\alpha\cos\beta \end{bmatrix}   (4)

Applying this rotation matrix to every point yields the coordinates of the rotated image; a short sketch of assembling these matrices follows.
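The rotation matrices in (1)-(3) could be assembled as in the following minimal numpy sketch; the angles are illustrative values in radians, and the matrices are composed in the order written in (4).

```python
import numpy as np

def rot_x(alpha):                       # R1 in (1): rotation around the x-axis
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[1, 0, 0],
                     [0, c, s],
                     [0, -s, c]])

def rot_y(beta):                        # R2 in (2): rotation around the y-axis
    c, s = np.cos(beta), np.sin(beta)
    return np.array([[c, 0, -s],
                     [0, 1, 0],
                     [s, 0, c]])

def rot_z(gamma):                       # R3 in (3): rotation around the z-axis
    c, s = np.cos(gamma), np.sin(gamma)
    return np.array([[c, s, 0],
                     [-s, c, 0],
                     [0, 0, 1]])

# Illustrative angles in radians; the product follows the order written in (4).
alpha, beta, gamma = np.radians([10.0, 25.0, 5.0])
R = rot_x(alpha) @ rot_y(beta) @ rot_z(gamma)
```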
To rotate the three-dimensional data together with the two-dimensional image, the light intensity is added as a fourth component of each point, so that the rotated coordinates and the corresponding intensities are obtained together. The data matrix therefore takes the form of (5), with the intensity appended to each row of the image data:

D = \begin{bmatrix} x_1 & y_1 & z_1 & I_1 \\ x_2 & y_2 & z_2 & I_2 \\ x_3 & y_3 & z_3 & I_3 \\ x_4 & y_4 & z_4 & I_4 \\ x_5 & y_5 & z_5 & I_5 \\ \vdots & \vdots & \vdots & \vdots \end{bmatrix}   (5)

The important point here is that the two-dimensional and three-dimensional images must have the same size and must be accurately registered with full correspondence between them. In the database used, the two-dimensional images are larger than the three-dimensional images, so the two-dimensional images are first resized to the dimensions of the three-dimensional images. A common point is then marked in both images, the two images are aligned by their overlap in x and y, and the data become available for every coordinate [15, 16]. Because the data matrix has been extended, the rotation matrix must be extended as well. Since the intensity does not affect the pixel location, the rotation matrix takes the form of (6), in which the added zeros and the one ensure that the light intensity is passed through unchanged (illustrated in the sketch after Figure 2):

\begin{bmatrix} R & 0 \\ 0 & 1 \end{bmatrix}   (6)

The rotated data, collected back into a two-dimensional matrix, then form the rotated two-dimensional image [17].

2.4. Face detection
Using range information has several advantages. First, the primary data describe depth rather than light intensity, so the resulting image does not depend on the strength of the light, the angle of illumination, or the face size. Moreover, because the data do not rely on the light reflected from the face, they are not affected by skin color changes caused by makeup, sunlight, or sunburn. In general, three-dimensional data can be translated and rotated to any desired angle in three-dimensional space, so the representation can be computed at that angle; if the image is not at the desired angle, it can be rotated and the image at the desired angle recalculated [18-20]. One disadvantage of this method is the need for three-dimensional cameras, which are more expensive than two-dimensional cameras, and facial hair can make the extraction of depth data difficult [20]. Because faces are detected from the trinity of the eyes and nose, the system can cope with problems such as occlusion of the ears, hair, eyebrows, and so on (Figure 2).

Figure 2. Face detection using nose and eyes

The complete representation can be obtained with a panoramic camera or from multiple photos taken from different directions.
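Returning to (5) and (6), the following sketch shows how the augmented rotation leaves the intensity column untouched while rotating the coordinates; the sample points and the column-vector convention behind the transpose are assumptions for illustration, not details from the paper.

```python
import numpy as np

def augmented_rotation(R):
    """Embed a 3x3 rotation R in a 4x4 matrix as in (6); the extra row and
    column leave the fourth (intensity) component unchanged."""
    M = np.eye(4)
    M[:3, :3] = R
    return M

def rotate_points_with_intensity(D, R):
    """D is an (N, 4) matrix whose rows are [x, y, z, I] as in (5)."""
    # The transpose assumes R rotates column vectors; only x, y, z are affected.
    return D @ augmented_rotation(R).T

# Illustrative data: two points with intensities 0.8 and 0.5.
D = np.array([[10.0, 0.0, 50.0, 0.8],
              [ 0.0, 5.0, 55.0, 0.5]])
Rz90 = np.array([[0.0, -1.0, 0.0],   # 90-degree rotation about the z-axis
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
print(rotate_points_with_intensity(D, Rz90))
```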
2.5. Experimental testing
The proposed method was applied to 150 faces from the Bosphorus database and achieved 99.7% accuracy in locating the position of the nose and 99.3% and 96.7% accuracy in locating the positions of the two eyes. Table 1 lists, for each viewing condition, the number of scans for which the nose and the inner parts of the eyes were correctly located.

Table 1. The number of nose and eye positions that were detected
Viewing angle                             Nose detection   Left eye detection   Right eye detection
Front photo                               143              141                  139
Rotated 25 degrees around the y-axis      143              143                  139
Small rotation around the z-axis          143              143                  141
Large rotation around the z-axis          143              143                  141
Smiling                                   143              142                  140
Open mouth                                141              141                  140
Rotation around the x-axis (up)           143              143                  141
Rotation around the x-axis (down)         142              142

3. RESULTS AND DISCUSSION
The proposed algorithm was applied to 14 pictures of each of 150 persons; the results are shown in Table 2, and Figure 3 plots the accuracy for the different error thresholds.

Table 2. Estimation accuracy of the angle for various error thresholds
Error        z-axis    y-axis    x-axis
10° error    98.21%    99.52%    98.81%
6° error     94.88%    98.81%    92.33%
3° error     77.46%    95.00%    67.24%

Figure 3. Accuracy diagram for different errors

The running time of this algorithm, excluding the time needed to locate the nose, is less than 800 milliseconds, and the method requires no training data. Because of the hierarchical structure, a wrong estimate of one angle can also lead to wrong estimates of the other angles.
Because the data are mapped onto the plane Z = 0, an error in the estimated angle around the z-axis is possible. If the parameters could instead be estimated by fitting an ellipse in three-dimensional space, the accuracy could be increased. Likewise, if the angles around the x- and y-axes could be determined directly from the original image together with the angle estimation, both the running time and the error of the method would decrease.

One of the most important capabilities of the human visual system is face recognition, which plays an important role in everyone's social life; our relationships with other people are based on our ability to identify them. The most prominent aspects of the human face recognition system are its high accuracy and reliability: humans recognize faces easily and with little effort, whereas this is not an easy task for computers. The main purpose of this study is to examine the effect of selecting appropriate features of three-dimensional images for face recognition. The results show that the method succeeds in identifying faces with different characteristics. The approaches proposed so far are summarized in Table 3.

Table 3. Comparison of classification accuracy of different methods
Method                     3D face database           Maximum accuracy
Blanz et al. [21]          FRGC                       92%
Dibeklioglu et al. [22]    Bosphorus                  79.41%
Mahmood et al. [23]        GavabDB                    90%
Berretti et al. [24]       UND, FRGC v2.0, GavabDB    82.1%
Perakis et al. [25]        UND                        –
Ding et al. [26]           LFW                        92.95%
Proposed method            Bosphorus                  99.52%

Because of the sensitive nature of face detection, high precision is of the utmost importance. The results show that an open mouth increases the chance of losing the nose. Since the method relies not only on the nose but also on the eyes, this weakness is largely compensated; however, when both eyes are occluded, when that part of the image is missing, or when the mouth is open, identification performance is lower than when the mouth is closed.

4. CONCLUSION
Machine recognition of human faces from 3D images has become an active research area in image processing, pattern recognition, neural networks, and computer vision. The face plays an essential role in identifying people and conveying their feelings in society. The human ability to recognize faces is remarkable: we can recognize thousands of faces over a lifetime and, at a glance, recognize familiar faces even after years of separation. Face recognition has become an important issue in applications such as security systems, credit card control, and crime identification. In this article, a new method for automatic face recognition from 3D face images was presented, in which geometric features of the face are used to identify faces with the help of a classifier. The results of applying the proposed method to the Bosphorus database showed the highest accuracy compared with previous methods.

ACKNOWLEDGMENT
I would like to express my deepest appreciation to Professor Bulent Esnikor from Boğaziçi University for providing me with significant data that greatly assisted the research.

REFERENCES
[1] Xu C., Tan T., Wang Y., and Quan L., "Combining local features for robust nose location in 3D facial data," Pattern Recognition Letters, vol. 27, no. 13, pp. 62-73, 2006.
[2] Dibeklioglu H., "Part-based 3D face recognition under pose and expression variations," Master's thesis, Boğaziçi University, 2008.
[3] Dibeklioglu H., Salah A., and Akarun L., "3D facial landmarking under expression, pose, and occlusion variations," in Proc. 2nd IEEE International Conference on Biometrics: Theory, Applications and Systems, pp. 1-6, 2008.
[4] Romero-Huertas M. and Pears N., "3D facial landmark localization by matching simple descriptors," in Proc. 2nd IEEE International Conference on Biometrics: Theory, Applications and Systems, 2008.
[5] Perakis P., Passalis G., Theoharis T., Toderici G., and Kakadiaris I., "Partial matching of interpose 3D facial data for face recognition," in Proc. 3rd IEEE International Conference on Biometrics: Theory, Applications and Systems, pp. 439-446, 2009.
[6] Perakis P., Theoharis T., Passalis G., and Kakadiaris I., "Automatic 3D facial region retrieval from multi-pose facial datasets," in Proc. Eurographics Workshop on 3D Object Retrieval, pp. 37-44, 2009.
[7] Xie W., Shen L., Yang M., and Lai Z., "Active AU based patch weighting for facial expression recognition," Sensors (Basel), vol. 17, p. 275, 2017.
[8] Berretti S., Del Bimbo A., and Pala P., "Sparse matching of salient facial curves for recognition of 3D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374-389, 2013.
[9] Logan A. J., Gordon G. E., and Loffler G., "Contributions of individual face features to face discrimination," Vision Research, vol. 137, pp. 29-39, 2017.
[10] Ravi S. and Wilson S., "Face detection with facial features and gender classification based on support vector machine," in Proc. 2010 IEEE International Conference on Computational Intelligence and Computing Research, ISBN: 9788183713627, 2010.
[11] Verma T. and Sahu R. K., "PCA-LDA based face recognition system & results comparison by various classification techniques," in Proc. 2013 International Conference on Green High Performance Computing (ICGHPC), Nagercoil, pp. 1-7, 2013.
[12] Zhou S. and Xiao S., "3D face recognition: a survey," Human-centric Computing and Information Sciences, vol. 8, p. 35, 2018.
[13] Borade S. N., Deshmukh R. R., and Shrishrimal P., "Effect of distance measures on the performance of face recognition using principal component analysis," in Berretti S., Thampi S., and Srivastava P. (eds.), Intelligent Systems Technologies and Applications, Advances in Intelligent Systems and Computing, vol. 384, 2016.
[14] Islam S. M. S., Davies R., Bennamoun M., Owens R., and Mian A., "Multibiometric human recognition using 3D ear and face features," Pattern Recognition, vol. 46, no. 3, pp. 613-627, 2013.
[15] Zhao X., Zhang W., Evangelopoulos G., Huang D., Shah S., Wang Y., Kakadiaris I., and Chen L., "Benchmarking asymmetric 3D-2D face recognition systems," in Proc. 3D Face Biometrics Workshop, in conjunction with the IEEE Int'l Conf. on Automatic Face and Gesture Recognition (FG), 2013.
[16] Wang Y., Pan G., and Liu J., "A deformation model to reduce the effect of expressions in 3D face recognition," Visual Computer, vol. 27, no. 5, pp. 333-345, 2011.
[17] Watt A. and Watt M., Advanced Animation and Rendering Techniques: Theory and Practice, Addison-Wesley, Reading, MA, 1992.
[18] Moreno A., Sanchez A., Velez J., and Diaz F., "Face recognition using 3D surface extracted descriptors," in Proc. Irish Machine Vision and Image Processing Conference, 2004.
[19] Ratyal N. I., Taj I. A., Sajid M., Ali N., Mahmood A., and Razzaq S., "Three-dimensional face recognition using variance-based registration and subject-specific descriptors," International Journal of Advanced Robotic Systems, pp. 1-16, 2019.
[20] Miller P. and Lyle J., "The effect of distance measures on the recognition rates of PCA and LDA based facial recognition," Tech. rep., Clemson University, 2008.
[21] Blanz V., Scherbaum K., and Seidel H. P., "Fitting a morphable model to 3D scans of faces," in Computer Vision, pp. 1-8, 2007.
[22] Dibeklioglu H., Salah A. A., and Akarun L., "3D facial landmarking under expression, pose and occlusion variations," in Biometrics: Theory, Applications and Systems, pp. 1-6, 2008.
[23] Mahmood S. A., Ghani R. F., and Kerim A. A., "3D face recognition using pose invariant nose region detector," in Computer Science and Electronic Engineering Conference, 2014.
[24] Berretti S., Del Bimbo A., and Pala P., "Sparse matching of salient facial curves for recognition of 3D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, pp. 374-389, 2013.
[25] Perakis P., Passalis G., Theoharis T., Toderici G., and Kakadiaris I. A., "Partial matching of interpose 3D facial data for face recognition," in Biometrics: Theory, Applications, and Systems, pp. 439-446, 2009.
[26] Hua W. G., "Implicit elastic matching with random projections for pose-variant face recognition," in Computer Vision and Pattern Recognition, pp. 1502-1509, 2009.