Semantic Communication for VR Music Live Streaming With Rate Splitting
Published in IEEE Transactions on Computational Social Systems, 2024
Abstract: Virtual reality (VR) live streaming has remarkably transformed music performances, enabling unique interaction between artists and their audiences within a virtual environment and offering an experience that goes well beyond the constraints of conventional live music events. This article proposes a novel framework that enhances VR music live streaming by integrating semantic communication with rate splitting. The framework improves user experience by transmitting music and speech components efficiently: a semantic encoder extracts semantic information for music and speech separately, capturing the distinct characteristics of each. Given the extracted features, we propose a rate-splitting-based transmission algorithm that enhances user utility by designating music as a common message for all users and speech as private messages targeted to specific users according to their preferences. Simulation results demonstrate significant performance gains over the baseline methods.
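The rate-splitting idea in the abstract can be illustrated with a minimal sketch, assuming a single-antenna Gaussian broadcast channel with successive interference cancellation; the power levels, channel gains, and function name below are illustrative assumptions, not the paper's actual system model:

```python
import math

def rsma_rates(channel_gains, p_common, p_private, noise=1.0):
    """Achievable rates (bits/s/Hz) under a simple rate-splitting scheme.

    Each user first decodes the common stream (here, the shared music
    layer), treating all private streams as noise, then cancels it and
    decodes its own private stream (the personalized speech).
    """
    total_private = sum(p_private)
    # The common rate is limited by the weakest user's decoding ability,
    # since every user must decode the shared music stream.
    common_rate = min(
        math.log2(1 + p_common * g / (total_private * g + noise))
        for g in channel_gains
    )
    # After cancelling the common stream, each user's private speech
    # stream still sees the other users' private streams as interference.
    private_rates = [
        math.log2(1 + p_private[k] * g / ((total_private - p_private[k]) * g + noise))
        for k, g in enumerate(channel_gains)
    ]
    return common_rate, private_rates

# Illustrative values: two users, most power on the shared music stream.
common, private = rsma_rates([1.0, 0.5], p_common=8.0, p_private=[1.0, 1.0])
```

Allocating more power to the common stream favors the shared music that all users consume, while the residual private power carries user-specific speech, which mirrors the common/private split the paper applies to the two audio components.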
Recommended citation: J. Zou, L. Xu and S. Sun, "Semantic Communication for VR Music Live Streaming With Rate Splitting," in IEEE Transactions on Computational Social Systems, vol. 12, no. 2, pp. 918-927, April 2025, doi: 10.1109/TCSS.2024.3443176.
Download Paper
