A Heterogeneous Multimodal Graph Learning Framework for Recognizing User Emotions in Social Networks
Sree Bhattacharyya, Shuhua Yang, James Z. Wang
The Pennsylvania State University, USA
Abstract:
The rapid expansion of social media platforms
has provided unprecedented access to massive amounts of multimodal
user-generated content. Comprehending user emotions can provide
valuable insights for improving communication and understanding of
human behaviors. Despite significant advancements in Affective
Computing, the diverse factors influencing user emotions in social
networks remain relatively understudied. Moreover, there is a notable
lack of deep learning-based methods for predicting user emotions in
social networks, a gap that could be addressed by leveraging the
extensive multimodal data available. This work presents a novel formulation of
personalized emotion prediction in social networks based on
heterogeneous graph learning. Building upon this formulation, we
design HMG-Emo, a Heterogeneous Multimodal Graph Learning Framework
that utilizes deep learning-based features for user emotion
recognition. Additionally, we include a dynamic context fusion module
in HMG-Emo that is capable of adaptively integrating the different
modalities in social media data. Through extensive experiments, we
demonstrate the effectiveness of HMG-Emo and verify the superiority of
adopting a graph neural network-based approach, which outperforms
existing baselines that use rich hand-crafted features. To the best of
our knowledge, HMG-Emo is the first multimodal and deep learning-based
approach to predict personalized emotions within online social
networks. Our work highlights the significance of exploiting advanced
deep learning techniques for less-explored problems in Affective
Computing.
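Illustrative sketch (not from the paper): the abstract describes building a heterogeneous graph over users and their multimodal posts and running graph neural network message passing over it. The short Python sketch below shows one possible way such a graph could be assembled with PyTorch Geometric; the node types (user, text, image), edge relations, feature dimensions, and the choice of SAGEConv layers are all assumptions made for illustration, and the fixed 'sum' aggregation is only a stand-in for the paper's adaptive dynamic context fusion module.

import torch
from torch_geometric.data import HeteroData
from torch_geometric.nn import HeteroConv, SAGEConv

# Build a toy heterogeneous multimodal graph (all sizes are hypothetical).
data = HeteroData()
data['user'].x  = torch.randn(100, 64)    # user embeddings
data['text'].x  = torch.randn(300, 768)   # post-text features (e.g., from a language model)
data['image'].x = torch.randn(250, 512)   # post-image features (e.g., from a vision model)

# Relations pointing toward users, so messages flow into user nodes.
data['text', 'written_by', 'user'].edge_index = torch.stack(
    [torch.randint(0, 300, (400,)), torch.randint(0, 100, (400,))])
data['image', 'posted_by', 'user'].edge_index = torch.stack(
    [torch.randint(0, 250, (350,)), torch.randint(0, 100, (350,))])
data['user', 'follows', 'user'].edge_index = torch.randint(0, 100, (2, 500))

# One heterogeneous message-passing layer; lazy (-1, -1) input sizes let each
# relation accept its own feature dimensionality. A fixed 'sum' aggregation is
# used here only for brevity, in place of an adaptive fusion of modalities.
conv = HeteroConv({
    ('text', 'written_by', 'user'): SAGEConv((-1, -1), 32),
    ('image', 'posted_by', 'user'): SAGEConv((-1, -1), 32),
    ('user', 'follows', 'user'):    SAGEConv((-1, -1), 32),
}, aggr='sum')

out = conv(data.x_dict, data.edge_index_dict)  # {'user': tensor of shape [100, 32]}
# A classifier over out['user'] would then yield per-user emotion predictions.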
Full Paper (including Appendix)
(PDF, 1.3MB)
Citation:
Sree Bhattacharyya, Shuhua Yang, and James Z. Wang, "A Heterogeneous
Multimodal Graph Learning Framework for Recognizing User Emotions in
Social Networks," Proceedings of the International Conference on
Affective Computing and Intelligent Interaction, pp. -, Glasgow, U.K.,
September 2024.
© 2024 AAAC. Personal use of this material is permitted. However,
permission to reprint/republish this material for advertising or
promotional purposes or for creating new collective works for resale
or redistribution to servers or lists, or to reuse any copyrighted
component of this work in other works must be obtained from the AAAC.
Last Modified:
August 20, 2024