Author: Eunji Oh

SMPC 2024

Figure: Violin plots comparing Empathic Accuracy (EA) across three modalities (audio-only, visual-only, and video-and-audio) in music and social situations. Statistical analysis used a repeated measures ANOVA, and significant differences (p < 0.05) are marked on the plot. Music and social situations are visualized separately to highlight context-specific differences.
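For readers who want to build a figure of this kind, the sketch below shows one way to draw grouped violin plots in Python with seaborn. This is not the plotting code used for the poster; the data file, column names ('participant', 'context', 'modality', 'accuracy'), and category labels are all hypothetical.

```python
# A minimal sketch (assumed, not the authors' code) of a violin plot
# comparing EA across modalities, with one panel per context.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("ea_scores.csv")  # hypothetical long-format data

g = sns.catplot(data=df, x="modality", y="accuracy", col="context",
                kind="violin", inner="quartile",
                order=["audio", "visual", "audiovisual"])
g.set_axis_labels("Modality", "Empathic Accuracy")
plt.show()
```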

Oh, E.J., & Lee, K.M.* (2024, July). Intermodal Analysis of Emotion Inference: Examining Shared Processes in Music and Social Contexts. Conference of the Society for Music Perception and Cognition (SMPC), Banff, AB.


Abstract

The intricate relationship between music and emotional experiences operates at both individual and societal levels. Despite the pivotal role of emotional interpretation in these experiences, little research has explored the connection between emotional decoding in music and in social contexts. This study addresses this gap through two primary objectives: 1) examining shared processes of emotion inference in musical and social-emotional contexts, and 2) comparing modality effects across these domains. Using the Empathic Accuracy paradigm, 36 participants performed real-time emotion inference on a 9-point scale (1: ‘very negative’ to 9: ‘very positive’) while watching videos. The stimuli comprised 18 piano performances and 18 personal autobiographical stories, in which pianists and speakers portrayed joy/happiness, sadness, and anger through improvised music and spoken narratives. Stimuli were presented in three modalities (visual-only, audio-only, and video-and-audio) in counterbalanced orders. Each participant’s accuracy was measured by comparing inferred emotions to the correct answers (the pianists’ and speakers’ self-reported emotions for the piano performances and autobiographical stories, respectively) using linear mixed-effects models. The accuracy data were then separated into positive (joy/happiness) and negative (sadness and anger) valence. Pearson correlations examined the relationship between accuracy across emotional contexts (RQ1), and a three-way repeated measures ANOVA (context x valence x modality) explored modality effects (RQ2). Results indicated a positive correlation (r = 0.38, p = 0.023) between accuracy in decoding negative emotions in social situations and in negatively valenced music, suggesting a shared ability to interpret negative emotions across contexts. The second analysis showed higher accuracy in social situations than in music, except in the visual-only condition (F(2, 70) = 32.59, p < 0.001). These findings imply a shared emotion-decoding process between music and social contexts, which could support transfer of musical experience to social-emotional abilities. The results also point to a possible superiority of auditory over visual cues, underscoring the significance of audio in both musical and social contexts.
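As a companion to the abstract, here is a minimal sketch of the two analyses it describes: the RQ1 Pearson correlation and the RQ2 three-way repeated measures ANOVA. This is not the authors' analysis code; the input file, column names, and factor labels are assumptions, and the per-trial EA scores are taken as already computed.

```python
import pandas as pd
from scipy.stats import pearsonr
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per participant x trial with
# columns 'participant', 'context' ('music'/'social'), 'valence'
# ('positive'/'negative'), 'modality' ('audio'/'visual'/'audiovisual'),
# and 'accuracy' (the participant's EA score for that trial).
df = pd.read_csv("ea_scores.csv")

# RQ1: per-participant accuracy for negative emotions, averaged within
# each context, then correlated across the two contexts.
neg = df[df["valence"] == "negative"]
by_context = (neg.groupby(["participant", "context"])["accuracy"]
                 .mean().unstack())
r, p = pearsonr(by_context["music"], by_context["social"])
print(f"RQ1: r = {r:.2f}, p = {p:.3f}")

# RQ2: three-way repeated measures ANOVA (context x valence x modality)
# on per-cell mean accuracy; AnovaRM requires exactly one value per
# participant x cell, hence the aggregation step.
cells = (df.groupby(["participant", "context", "valence", "modality"])
           ["accuracy"].mean().reset_index())
anova = AnovaRM(cells, depvar="accuracy", subject="participant",
                within=["context", "valence", "modality"]).fit()
print(anova)
```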
