HARMONIZING EMOTION: A MULTIMODAL APPROACH TO ANALYZING HUMAN AFFECT IN MUSIC RECOMMENDATION SYSTEMS
Abstract
Music is a universal language enjoyed throughout the world. Zatorre and Peretz (2001) state that musical activity, with its unique essence, appears to have been part of every recorded society on Earth, dating back at least 250,000 years [1]. As the digital age advances, customized music recommendation systems have become deeply ingrained in our everyday routines, providing curated selections of songs that match our preferences. Mudit Kumar Tyagi et al. [2] suggested a method for extracting user preferences from their music listening history. Incorporating demographic information such as age and gender provides a more nuanced understanding of a listener's identity. People of different ages and genders may have distinct musical preferences, and these attributes can act as significant filters in the recommendation process. For example, a teenager's taste in music is likely to differ from that of a middle-aged adult; similarly, gender can play a role in shaping musical choices. Integrating age and gender detection into music recommendation systems ensures that the music offered is not only personally relevant but also age-appropriate and respectful of gender sensitivities. This research proposes a multimodal approach that combines demographic features and emotional signals to refine and personalize music selection through advanced machine learning techniques.
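To make the proposed fusion concrete, the Python sketch below shows one plausible way demographic and emotional signals could be combined to rank candidate tracks at recommendation time. It is a minimal illustration under assumed names: ListenerProfile, Track, the age/mood/audience metadata fields, and the fusion weights are all hypothetical and do not represent the actual system or models evaluated in this paper.

from dataclasses import dataclass

@dataclass
class ListenerProfile:
    age: int       # e.g., predicted by an age-estimation model
    gender: str    # e.g., predicted by a gender-classification model
    emotion: str   # e.g., "happy", "sad", "calm" from an emotion model

@dataclass
class Track:
    title: str
    target_age_range: tuple  # (min_age, max_age) the track is curated for
    mood: str                # mood tag in the music catalog
    audience: str = "any"    # "any", "male", or "female" (illustrative)

def score(track: Track, profile: ListenerProfile) -> float:
    """Late fusion of demographic fit and emotional fit into one score."""
    lo, hi = track.target_age_range
    age_fit = 1.0 if lo <= profile.age <= hi else 0.0
    gender_fit = 1.0 if track.audience in ("any", profile.gender) else 0.0
    mood_fit = 1.0 if track.mood == profile.emotion else 0.0
    # Weighted combination of the modalities (weights are illustrative).
    return 0.3 * age_fit + 0.1 * gender_fit + 0.6 * mood_fit

def recommend(tracks, profile, k=3):
    """Return the k highest-scoring tracks for this listener."""
    return sorted(tracks, key=lambda t: score(t, profile), reverse=True)[:k]

if __name__ == "__main__":
    catalog = [
        Track("Upbeat Pop Song", (13, 25), "happy"),
        Track("Mellow Jazz Piece", (30, 60), "calm"),
        Track("Melancholic Ballad", (18, 45), "sad"),
    ]
    listener = ListenerProfile(age=17, gender="female", emotion="happy")
    for t in recommend(catalog, listener):
        print(t.title, score(t, listener))

In this toy setting, the teenage listener detected as "happy" receives the upbeat track first, since both the demographic and emotional modalities agree; in practice the fusion weights would be learned rather than hand-set.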