• Journal of Internet Computing and Services
    ISSN 2287 - 1136 (Online) / ISSN 1598 - 0170 (Print)
    https://jics.or.kr/

Korean Emotional Speech and Facial Expression Database for Emotional Audio-Visual Speech Generation


Ji-Young Baek, Sera Kim, Seok-Pil Lee, Journal of Internet Computing and Services, Vol. 23, No. 2, pp. 71-77, Apr. 2022
DOI: 10.7472/jksii.2022.23.2.71
Keywords: Speech Synthesis, Speech Emotion, database, Multi Modal

Abstract

In this paper, a database is collected for extending a speech synthesis model to one that synthesizes speech according to emotion and generates facial expressions. The database is divided into male and female data and consists of emotional speech and facial expressions. Two professional actors of different genders speak sentences in Korean. The sentences are divided into four emotions: happiness, sadness, anger, and neutrality. Each actor performs about 3,300 sentences per emotion. The 26,468 sentences collected by filming do not overlap, and each recording contains a facial expression matching the corresponding emotion. Since building a high-quality database is important for the performance of future research, the database is assessed for emotional category, intensity, and genuineness. To determine recognition accuracy according to data modality, the database is divided into audio-video, audio-only, and video-only data.



Cite this article
[APA Style]
Baek, J., Kim, S., & Lee, S. (2022). Korean Emotional Speech and Facial Expression Database for Emotional Audio-Visual Speech Generation. Journal of Internet Computing and Services, 23(2), 71-77. DOI: 10.7472/jksii.2022.23.2.71.

[IEEE Style]
J. Baek, S. Kim, S. Lee, "Korean Emotional Speech and Facial Expression Database for Emotional Audio-Visual Speech Generation," Journal of Internet Computing and Services, vol. 23, no. 2, pp. 71-77, 2022. DOI: 10.7472/jksii.2022.23.2.71.

[ACM Style]
Ji-Young Baek, Sera Kim, and Seok-Pil Lee. 2022. Korean Emotional Speech and Facial Expression Database for Emotional Audio-Visual Speech Generation. Journal of Internet Computing and Services, 23, 2, (2022), 71-77. DOI: 10.7472/jksii.2022.23.2.71.