Facial Emotion Recognition with
Facial Gestures
By Ashwin Kedari Rachha
3263
Why Emotion Detection?
● The motivation behind choosing this topic lies in the large investments corporations make in feedback forms and surveys, which often fail to yield a proportionate return.
● Emotion detection through facial gestures is a technology that aims to improve product and service performance by monitoring how customers react to particular products or to service staff.
Some companies that make use of emotion detection...
● While Disney uses emotion-detection tech to gauge audience opinion of a completed project, other brands have used it to directly inform advertising and digital marketing.
● Kellogg’s is just one high-profile example, having used Affectiva’s software to test
audience reaction to ads for its cereal.
● Unilever does this, using HireVue’s AI-powered technology to screen prospective
candidates based on factors like body language and mood. In doing so, the
company is able to find the person whose personality and characteristics are best
suited to the job.
Emotion Expression Recognition Using SVM
What are Support Vector Machines?
SVMs plot the training vectors in a high-dimensional feature space and label each vector with its class. A hyperplane is then drawn between the training vectors so that the distance between the different classes is maximized. The hyperplane is determined through a kernel function (radial basis, linear, polynomial or sigmoid), which is given as input to the classification software [1999][Joachims, 1998b].
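A minimal sketch of how such an SVM classifier might be set up with scikit-learn, assuming facial features have already been extracted into a feature matrix X with emotion labels y (the names, shapes and kernel settings here are illustrative, not the ones from the cited work):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Placeholder data: one row of extracted facial features per sample, 7 emotion labels
X = np.random.rand(200, 64)
y = np.random.randint(0, 7, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# RBF-kernel SVM; the kernel (rbf, linear, poly, sigmoid) is the hyperparameter mentioned above
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```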
Facial Expression Detection using Fuzzy Classifier
● The algorithm is composed of three main stages: an image processing stage, a facial feature extraction stage, and an emotion detection stage.
● In the image processing stage, the face region and facial components are extracted using a fuzzy color filter, a virtual face model, and a histogram analysis method.
● The features for emotion detection are extracted from the facial components in the facial feature extraction stage.
FACIAL EXPRESSION RECOGNITION: A DEEP LEARNING APPROACH
Facial Emotion Recognition by CNN
➽ Steps:
1. Data Preprocessing
2. Image Augmentation
3. Feature Extraction
4. Training
5. Validation
Dataset Description
The data consists of 48x48 pixel grayscale images of faces. Each face has been categorized into one of seven emotion categories (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral).
The training set consists of 28,709 examples. The public test set used for the leaderboard consists of 3,589 examples. The final test set, which was used to determine the winner of the competition, consists of another 3,589 examples. This dataset was prepared by Pierre-Luc Carrier and Aaron Courville as part of an ongoing research project.
Data Preprocessing
● The fer2013.csv file consists of three columns: emotion, pixels, and usage.
● The pixel string of each row is first parsed into a list of values.
● Since raw pixel values lie in the range 0-255, the pixel data is normalized to the range [0, 1] to reduce computational complexity.
● The stored face arrays are reshaped and resized to the required size of 48 x 48.
● The emotion labels and their corresponding pixel arrays are stored in separate objects.
● We use scikit-learn's train_test_split() function to split the dataset into training and testing data. With test_size=0.2, 20% of the data is held out for validation while the model is trained on the remaining 80% (see the sketch below).
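A minimal sketch of this preprocessing, assuming the fer2013.csv layout described above (column names and details may differ slightly from the actual project code):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("fer2013.csv")

# Parse each space-separated pixel string into a 48x48 float array
faces = np.array([np.asarray(p.split(), dtype="float32").reshape(48, 48)
                  for p in data["pixels"]])
faces = faces[..., np.newaxis] / 255.0        # normalize to [0, 1], add channel axis

# One-hot encode the 7 emotion labels
labels = pd.get_dummies(data["emotion"]).values

# 80/20 train/validation split
X_train, X_val, y_train, y_val = train_test_split(faces, labels, test_size=0.2)
```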
Data Augmentation
More data is generated from the training set by applying transformations. This is required when the training set is not large enough to learn a good representation. The image data is generated by transforming the actual training images with rotations, crops, shifts, shear, zoom, flips, reflection, normalization, etc. (a sketch follows below).
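A minimal sketch of such augmentation using Keras' ImageDataGenerator (the transformation parameters below are illustrative, not the exact values used in the project):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Randomly transform training images on the fly; the ranges below are examples only
datagen = ImageDataGenerator(
    rotation_range=10,        # small random rotations
    width_shift_range=0.1,    # horizontal shifts
    height_shift_range=0.1,   # vertical shifts
    shear_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
)

# Yields batches of augmented images and labels during training
train_generator = datagen.flow(X_train, y_train, batch_size=64)
```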
Mini-Xception Model for Emotion Recognition
The architecture of the mini-Xception model consists of:
● Two convolutional layers, each followed by batch normalization.
● The output is then split into two branches: one branch passes through a convolutional layer while the other is processed by separable convolutional layers.
● The layers are followed by max pooling and global average pooling, finalized by a softmax layer (see the sketch below).
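A highly simplified Keras sketch of this kind of architecture, showing a single residual block with a separable-convolution branch; the real mini-Xception stacks several such blocks, so this is an assumption-laden illustration rather than the exact model:

```python
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(48, 48, 1))

# Two convolutional layers, each followed by batch normalization
x = layers.Conv2D(8, 3, padding="same", activation="relu")(inputs)
x = layers.BatchNormalization()(x)
x = layers.Conv2D(8, 3, padding="same", activation="relu")(x)
x = layers.BatchNormalization()(x)

# One branch: plain convolution; the other branch: separable convolution
residual = layers.Conv2D(16, 1, strides=2, padding="same")(x)
x = layers.SeparableConv2D(16, 3, padding="same", activation="relu")(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
x = layers.Add()([x, residual])

# Global average pooling followed by a softmax over the 7 emotions
x = layers.Conv2D(7, 3, padding="same")(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Activation("softmax")(x)

model = Model(inputs, outputs)
```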
Convolutional 2D
The 2D convolution is a fairly simple operation at heart: you start with a kernel, which
is simply a small matrix of weights. This kernel “slides” over the 2D input data,
performing an elementwise multiplication with the part of the input it is currently on,
and then summing up the results into a single output pixel.
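A tiny NumPy sketch of this sliding-window operation on a single-channel input (purely illustrative; real frameworks implement it far more efficiently):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution of a single-channel image with a small kernel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # elementwise multiply the current window by the kernel, then sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float)  # simple edge detector
print(conv2d(image, kernel))
```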
Batch Normalization and Max Pooling 2D
Batch normalization reduces the amount by which the hidden-unit activations shift around during training (internal covariate shift).
In max pooling, a kernel of size n x n is moved across the matrix, and for each position the maximum value is taken and placed in the corresponding position of the output matrix.
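A short NumPy sketch of non-overlapping 2x2 max pooling on a single-channel matrix (illustrative only):

```python
import numpy as np

def max_pool2d(x, size=2):
    """Non-overlapping max pooling with an n x n window (stride = window size)."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

x = np.array([[1, 3, 2, 4],
              [5, 6, 1, 2],
              [7, 2, 9, 0],
              [4, 8, 3, 1]], dtype=float)
print(max_pool2d(x))   # [[6. 4.]
                       #  [8. 9.]]
```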
Global Average Pooling
Global average pooling (GAP) layers minimize overfitting by reducing the total number of parameters in the model. Similar to max pooling layers, GAP layers are used to reduce the spatial dimensions of a three-dimensional tensor.
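GAP simply averages each feature map down to a single number; a one-line NumPy illustration:

```python
import numpy as np

feature_maps = np.random.rand(6, 6, 7)   # H x W x channels, e.g. one map per emotion
gap = feature_maps.mean(axis=(0, 1))     # one value per channel
print(gap.shape)                         # (7,)
```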
Optimizer, Loss Function and Metrics
● The loss function used is categorical cross-entropy.
● A loss function measures how far the model's prediction is from the actual value.
● Cross-entropy loss, or log loss, measures the performance of a classification model whose output is a probability value between 0 and 1. Cross-entropy loss increases as the predicted probability diverges from the actual label, so predicting a probability of 0.012 when the actual observation label is 1 would be bad and result in a high loss value (see the numeric example below).
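A small numeric sketch of categorical cross-entropy on a single example (the probabilities are made up for illustration):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy between a one-hot label and predicted class probabilities."""
    return -np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)))

y_true = np.array([0, 0, 0, 1, 0, 0, 0])             # true class: Happy (index 3)
good   = np.array([0.02, 0.02, 0.02, 0.88, 0.02, 0.02, 0.02])
bad    = np.array([0.30, 0.20, 0.20, 0.012, 0.10, 0.10, 0.088])

print(categorical_crossentropy(y_true, good))  # small loss (about 0.13)
print(categorical_crossentropy(y_true, bad))   # large loss (about 4.42)
```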
● The optimizer used is the Adam() optimizer.
● Adam stands for Adaptive Moment Estimation. It is a method that computes adaptive learning rates for each parameter. In addition to storing an exponentially decaying average of past squared gradients, like AdaDelta, Adam also keeps an exponentially decaying average of past gradients m(t), similar to momentum (a short sketch of compiling the model with this setup follows).
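Adam's two running averages are, in standard notation, m_t = beta1 * m_{t-1} + (1 - beta1) * g_t and v_t = beta2 * v_{t-1} + (1 - beta2) * g_t^2. A minimal sketch of compiling the model with this optimizer and loss (standard Keras calls; the learning rate is illustrative, not necessarily the project's setting):

```python
from tensorflow.keras.optimizers import Adam

model.compile(
    optimizer=Adam(learning_rate=0.001),   # default betas: 0.9 and 0.999
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```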
Validation
● In the validation phase, various OpenCV and Keras functions are used.
● First, each video frame is read from a video capture object.
● An LBP cascade classifier is used to detect the facial region of interest.
● The image frame is converted to grayscale, then resized and reshaped with the help of NumPy.
● The resized image is fed to the predictor, which is loaded with the keras.models.load_model() function.
● The class with the maximum predicted probability (argmax) is taken as the output emotion.
● A rectangle is drawn around the facial region and the predicted label is written above the rectangular box (see the sketch below).
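A compact sketch of such a webcam inference loop; the model path, cascade file name and label order are assumptions, not taken from the project code:

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

model = load_model("emotion_model.h5")                               # trained CNN (illustrative path)
face_cascade = cv2.CascadeClassifier("lbpcascade_frontalface.xml")   # LBP cascade file

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        roi = roi.reshape(1, 48, 48, 1)                       # batch, height, width, channel
        label = EMOTIONS[int(np.argmax(model.predict(roi)))]  # class with max probability
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("Emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```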
Performance Evaluation
The model gives 65-66% accuracy on the validation set while training. The CNN model learns the representation features of emotions from the training images. Below are a few epochs of the training process with a batch size of 64.
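A sketch of how such a training run might be launched, reusing the augmentation generator and validation split from the earlier sketches (the epoch count is illustrative):

```python
history = model.fit(
    train_generator,                 # augmented batches of 64 from ImageDataGenerator
    validation_data=(X_val, y_val),
    epochs=50,                       # illustrative value
    verbose=1,
)
```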
Output and statistics