Image and video based emotion recognition using deep learning

Bibliographic Details
Main Author: Arselan Ashraf (Author)
Format: Thesis
Language: English
Published: Kuala Lumpur : Kulliyyah of Engineering, International Islamic University Malaysia, 2021
Subjects:
Online Access:http://studentrepo.iium.edu.my/handle/123456789/10766
Description
Summary: Emotion recognition from images, videos, or speech has been an active research problem for several years. The introduction of deep learning techniques such as the convolutional neural network (CNN) has enabled emotion recognition to achieve promising results. Because human facial expressions are vital to understanding a person's feelings, many studies have been carried out in this area. However, the field still lacks a visual-based emotion recognition model with good accuracy, and uncertainty remains about the influencing features, the type and number of emotions considered, and the choice of algorithms. This research develops an image- and video-based emotion recognition model that uses a CNN for automatic feature extraction and classification. The optimum CNN configuration was found to have three convolutional layers, each followed by max-pooling; the third convolutional layer was additionally followed by a batch normalization layer connected to two fully connected layers. This configuration was selected because it minimized the risk of overfitting while producing a normalized output. Five emotions are considered for recognition, to allow comparison with previous algorithms: angry, happy, neutral, sad, and surprised. The emotion recognition model is built on two datasets: an image dataset, the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP), and a video dataset, the Amsterdam Dynamic Facial Expression Set – Bath Intensity Variations (ADFES-BIV). Several pre-processing steps are applied to the data samples, followed by the popular and efficient Viola-Jones algorithm for face detection; the CNN then performs feature extraction and classification. Evaluation using the confusion matrix, accuracy, F1-score, precision, and recall shows that the video-based dataset obtained more promising results than the image-based dataset.
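The CNN configuration described above (three convolutional layers with max-pooling, batch normalization after the third, and two fully connected layers ending in five emotion classes) could be sketched in Keras as follows. The input size, filter counts, and kernel sizes are assumptions for illustration, as the abstract does not state them; 48x48 grayscale face crops are a common choice in facial expression work:

```python
# A minimal sketch of the described CNN configuration, NOT the thesis's
# exact model: input shape, filter counts, and kernel sizes are assumed.
from tensorflow.keras import layers, models


def build_emotion_cnn(input_shape=(48, 48, 1), num_classes=5):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Three convolutional layers, max-pooling attached to each
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        # Batch normalization follows the third convolutional block
        layers.BatchNormalization(),
        layers.Flatten(),
        # Two fully connected layers; the last outputs the five emotions
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    return model
```

The softmax output gives one probability per emotion class (angry, happy, neutral, sad, surprised), so predictions are taken as the argmax over the five outputs.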
The recognition accuracy, F1-score, precision, and recall were 99.38%, 99.22%, 99.4%, and 99.38% for the video dataset, and 83.33%, 79.1%, 84.46%, and 80% for the image dataset, respectively. The proposed algorithm was benchmarked against two other CNN-based algorithms; its accuracy is higher by about 5.33% and 3.33%, respectively, on the image dataset, and by 4.38% on the video dataset. The outcome of this research demonstrates the effectiveness and usability of the proposed system for visual-based emotion recognition.
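The evaluation metrics reported above (accuracy, precision, recall, F1-score derived from a confusion matrix) can be computed as sketched below. This is a generic illustration with macro-averaging assumed across the five emotion classes, not the thesis's actual evaluation code:

```python
# Illustrative computation of the reported metrics from a confusion
# matrix; macro-averaging across classes is an assumption.
import numpy as np


def confusion_matrix(y_true, y_pred, num_classes):
    """Rows are true classes, columns are predicted classes."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm


def metrics_from_cm(cm):
    tp = np.diag(cm).astype(float)
    # Per-class precision (column sums) and recall (row sums)
    precision = tp / np.maximum(cm.sum(axis=0), 1)
    recall = tp / np.maximum(cm.sum(axis=1), 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    accuracy = tp.sum() / cm.sum()
    # Macro-average over the emotion classes
    return accuracy, precision.mean(), recall.mean(), f1.mean()
```

For example, with labels `[0, 0, 1, 1, 2]` and predictions `[0, 1, 1, 1, 2]`, four of five samples are correct, giving an accuracy of 0.8.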
Item Description: Abstracts in English and Arabic.
"A dissertation submitted in fulfilment of the requirement for the degree of Master of Science (Computer and Information Engineering)." -- On title page.
Physical Description: xvi, 108 leaves : colour illustrations ; 30 cm.
Bibliography: Includes bibliographical references (leaves 95-101).