Facial Expression Based Emotion Recognition
Author(s)
(1) * Muhammad Ibrahim
        Turkey
(2) Burhan Ergen
        Turkey
(*) Corresponding Author
Abstract
Human communication predominantly relies on spoken and written language; however, nonverbal cues such as facial expressions play a critical role in conveying emotion. This study details the development and evaluation of a deep learning model for Facial Emotion Recognition (FER) using the VGG-16 architecture and the FER2013 dataset, which contains over 35,000 facial images captured in natural settings and labeled with seven emotions. The objective was to improve recognition accuracy and performance beyond existing benchmarks in the literature. Transfer learning was employed by leveraging pre-trained VGG-16 weights, with the classification layers restructured and fine-tuned for emotion categorization. Comprehensive preprocessing, including normalization and data augmentation, was implemented to improve model generalization and mitigate overfitting. The final model achieved an accuracy of 85.77%, surpassing several previous VGG-16-based FER models. Performance was assessed using accuracy, precision, recall, and F1-score, confirming the model's reliability. Integral to this success was hyperparameter tuning together with regularization techniques, notably dropout and early stopping. The model demonstrated the capability to extract salient features from low-resolution images, supporting its robustness. Future work can address potential use cases in areas such as transportation safety, security systems, and customer interaction analysis, enhancing the model's real-world applicability through more diverse datasets and advanced architectures.
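The abstract does not give the authors' implementation, but two of the steps it names, normalization with data augmentation and per-class precision/recall/F1 evaluation, can be sketched minimally. The sketch below assumes FER2013's 48x48 grayscale images and its seven emotion classes; the function names and the horizontal-flip augmentation are illustrative choices, not the paper's confirmed method.

```python
import numpy as np

def preprocess(images):
    """Scale 8-bit grayscale images to [0, 1] floats (normalization step)."""
    return images.astype(np.float32) / 255.0

def augment_flip(images, labels):
    """Double the training set with horizontally mirrored copies, a common
    augmentation for faces since expressions are roughly left-right symmetric."""
    flipped = images[:, :, ::-1]
    return np.concatenate([images, flipped]), np.concatenate([labels, labels])

def macro_scores(y_true, y_pred, n_classes=7):
    """Precision, recall, and F1 per class, macro-averaged over the
    seven FER2013 emotion categories."""
    precisions, recalls, f1s = [], [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        precisions.append(p)
        recalls.append(r)
        f1s.append(f)
    return np.mean(precisions), np.mean(recalls), np.mean(f1s)

# Toy demonstration on random 48x48 "images" with dummy labels.
rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, size=(4, 48, 48), dtype=np.uint8)
labels = np.array([0, 1, 2, 3])
x = preprocess(imgs)
x_aug, y_aug = augment_flip(x, labels)
print(x_aug.shape, y_aug.shape)  # (8, 48, 48) (8,)
```

In the study itself these steps would feed a VGG-16 backbone with its classification head replaced by a seven-way softmax; the macro-averaged scores above correspond to the precision, recall, and F1 metrics reported in the abstract.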
Copyright (c) 2025 Muhammad Ibrahim, Burhan Ergen
