
DeepEmoNet: Building Machine Learning Models for Automatic Emotion Recognition in Human Speeches

5 pages (main) + 1 page bibliography, 5 figures, 1 table
Abstract

Speech emotion recognition (SER) has been a challenging problem in spoken language processing research, because it is unclear how human emotions are connected to various components of sound such as pitch, loudness, and energy. This paper aims to tackle this problem using machine learning. In particular, we built several machine learning models using SVMs, LSTMs, and CNNs to classify emotions in human speech. In addition, by leveraging transfer learning and data augmentation, we efficiently trained our models to attain decent performance on a relatively small dataset. Our best model was a ResNet34 network, which achieved an accuracy of 66.7% and an F1 score of 0.631.
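The abstract's transfer-learning recipe (an ImageNet-pretrained ResNet34 fine-tuned on spectrograms of speech, with spectrogram-level augmentation) can be sketched as follows. This is not the authors' code: the use of PyTorch/torchaudio/torchvision, the sample rate, the number of mel bins, the eight-class label set, the masking-based augmentation, and the helper `waveform_to_image` are all illustrative assumptions.

```python
# Minimal sketch of transfer learning for SER: fine-tune an ImageNet-pretrained
# ResNet34 on log-mel spectrograms of speech clips. All hyperparameters and
# the label set are assumptions, not values reported in the paper.
import torch
import torch.nn as nn
import torchaudio
import torchvision

NUM_EMOTIONS = 8          # assumed number of emotion classes
SAMPLE_RATE = 16_000      # assumed audio sample rate

# Waveform -> log-mel spectrogram "image" for the CNN.
mel = torchaudio.transforms.MelSpectrogram(sample_rate=SAMPLE_RATE, n_mels=128)
to_db = torchaudio.transforms.AmplitudeToDB()

# One possible data-augmentation choice (SpecAugment-style masking); the paper
# does not specify which augmentations were used. Apply only during training.
augment = nn.Sequential(
    torchaudio.transforms.FrequencyMasking(freq_mask_param=15),
    torchaudio.transforms.TimeMasking(time_mask_param=20),
)

def waveform_to_image(waveform: torch.Tensor, train: bool = False) -> torch.Tensor:
    """Convert a mono waveform of shape (1, T) into a 3-channel log-mel tensor."""
    spec = to_db(mel(waveform))           # (1, n_mels, frames)
    if train:
        spec = augment(spec)              # mask random frequency/time bands
    return spec.repeat(3, 1, 1)           # tile to 3 channels for ResNet input

# Transfer learning: start from ImageNet weights, swap in an emotion classifier head.
model = torchvision.models.resnet34(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_EMOTIONS)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 1-second clips.
batch = torch.stack(
    [waveform_to_image(torch.randn(1, SAMPLE_RATE), train=True) for _ in range(4)]
)
labels = torch.randint(0, NUM_EMOTIONS, (4,))
loss = criterion(model(batch), labels)
loss.backward()
optimizer.step()
```

Starting from pretrained weights and augmenting spectrograms is a common way to get reasonable accuracy from a relatively small SER dataset, which matches the training strategy the abstract describes.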
