In this project, we survey the different methodologies that process and classify speech signals and detect the emotions they convey, since emotions play a vital role in communication. Here, the emotions are detected from the speech of a person, i.e. the speaker.
In human-machine interface applications, emotion recognition from the speech signal has been a research topic for many years. Emotions play an extremely important role in human mental life; they are a medium for expressing one's perspective or mental state to others. Speech Emotion Recognition (SER) can be defined as the extraction of the emotional state of the speaker from his or her speech signal.
There are a few universal emotions, including Neutral, Anger, Happiness and Sadness, which any intelligent system with finite computational resources can be trained to identify or synthesize as required. In this work, we extract Mel-frequency cepstral coefficients (MFCC), the chromagram, and the Mel-scaled spectrogram, in conjunction with spectral contrast and tonal centroid features. A Deep Neural Network is then used to classify the emotion.
Keywords: Emotions, Mel-Frequency Cepstral Coefficients, Chromagram, Mel-Scaled Spectrogram, Spectral Contrast, Tonal Centroid, Deep Neural Network.
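The following is a minimal sketch of the feature-extraction and classification pipeline described above, assuming librosa for the audio features and a Keras dense network as the Deep Neural Network; the file paths, layer widths and the emotion label set are illustrative assumptions, not the project's actual configuration.

    # Sketch only: assumes librosa and TensorFlow/Keras are installed.
    import numpy as np
    import librosa
    from tensorflow import keras

    def extract_features(path, sr=22050):
        """Return one feature vector per utterance: mean MFCC, chromagram,
        Mel spectrogram, spectral contrast and tonal centroid (tonnetz) values."""
        y, sr = librosa.load(path, sr=sr)
        stft = np.abs(librosa.stft(y))
        mfcc     = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40).T, axis=0)
        chroma   = np.mean(librosa.feature.chroma_stft(S=stft, sr=sr).T, axis=0)
        mel      = np.mean(librosa.feature.melspectrogram(y=y, sr=sr).T, axis=0)
        contrast = np.mean(librosa.feature.spectral_contrast(S=stft, sr=sr).T, axis=0)
        tonnetz  = np.mean(librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=sr).T, axis=0)
        # 40 + 12 + 128 + 7 + 6 = 193 values per utterance
        return np.hstack([mfcc, chroma, mel, contrast, tonnetz])

    EMOTIONS = ["neutral", "anger", "happiness", "sadness"]  # assumed label set

    def build_dnn(input_dim, n_classes=len(EMOTIONS)):
        """A plain fully connected network; the layer widths are placeholders."""
        model = keras.Sequential([
            keras.layers.Input(shape=(input_dim,)),
            keras.layers.Dense(256, activation="relu"),
            keras.layers.Dropout(0.3),
            keras.layers.Dense(128, activation="relu"),
            keras.layers.Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # Example usage (wav_paths and labels would come from a labelled speech corpus):
    # X = np.array([extract_features(p) for p in wav_paths])
    # y = np.array(labels)              # integer indices into EMOTIONS
    # model = build_dnn(X.shape[1])
    # model.fit(X, y, epochs=50, batch_size=32, validation_split=0.2)

Averaging each feature over time, as sketched here, collapses every utterance to a fixed-length vector so that a simple dense network can be used; the actual project may use a different pooling or network architecture.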
HARDWARE SPECIFICATIONS:
SOFTWARE SPECIFICATIONS: