Kirbiz, Serap
Date accessioned: 2025-05-05
Date available: 2025-05-05
Date issued: 2025
ISSN: 0016-0032 (print); 1879-2693 (online)
DOI: https://doi.org/10.1016/j.jfranklin.2025.107659
Handle: https://hdl.handle.net/20.500.11779/2570

Abstract: In this paper, a deep learning framework based on deep convolutional networks is proposed for automatic facial emotion recognition. To increase the generalization ability and robustness of the method, the dataset size is enlarged by merging three publicly available facial emotion datasets: CK+, FER+, and KDEF. Despite the increase in dataset size, the minority classes still suffer from an insufficient number of training samples, leading to data imbalance. The data imbalance problem is mitigated by online and offline augmentation techniques and random weighted sampling. Experimental results show that the proposed method recognizes the seven basic emotions with 82% accuracy, demonstrating the effectiveness of the approach in tackling data imbalance and improving classification performance in facial emotion recognition.

Language: en
Rights: info:eu-repo/semantics/closedAccess
Keywords: Facial Emotion Recognition; Convolutional Neural Networks; Face Alignment; Data Augmentation; Facial Landmarks; Random Weighted Sampling
Title: Improving Facial Emotion Recognition Through Dataset Merging and Balanced Training Strategies
Type: Article
DOI: 10.1016/j.jfranklin.2025.107659
Quartile: Q1 (WOS); Q1 (Scopus)
Issue: 7
Volume: 362
WOS ID: WOS:001458635300001
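Note: The abstract attributes the imbalance handling to online/offline augmentation combined with random weighted sampling. The sketch below is a minimal illustration of the weighted-sampling idea only, assuming PyTorch; the tensor shapes, class counts, and variable names are hypothetical and not taken from the paper. Each training example is drawn with probability inversely proportional to its class frequency, so minority emotion classes appear roughly as often as majority ones during training.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Hypothetical merged dataset: 1000 face images, 7 emotion labels (0..6), imbalanced.
images = torch.randn(1000, 3, 64, 64)
labels = torch.randint(0, 7, (1000,))
dataset = TensorDataset(images, labels)

# Per-class counts -> per-sample weights (inverse class frequency).
class_counts = torch.bincount(labels, minlength=7).clamp(min=1).float()
sample_weights = 1.0 / class_counts[labels]

# The sampler draws with replacement according to the weights,
# rebalancing the class distribution seen by the model each epoch.
sampler = WeightedRandomSampler(sample_weights, num_samples=len(dataset), replacement=True)
loader = DataLoader(dataset, batch_size=64, sampler=sampler)

for batch_images, batch_labels in loader:
    # A CNN forward/backward pass would go here; omitted in this sketch.
    pass
```

In the actual pipeline, the weights would be derived from the merged CK+/FER+/KDEF label distribution rather than random labels, and the sampler would be used alongside the online and offline augmentation described in the abstract.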