- Title
Improved self‐attentive Musical Instrument Digital Interface content‐based music recommendation system.
- Authors
Yadav, Naina; Kumar Singh, Anil; Pal, Sukomal
- Abstract
Automatic music recommendation is an open research problem that has seen much work in recent years. A common and successful approach is collaborative filtering, but it suffers from a cold‐start problem and requires a large amount of user‐personalized information, making it ineffective for recommending new and unpopular songs and for serving new users. In this article, we report a hybrid methodology that uses a song's content information. We use MIDI (Musical Instrument Digital Interface) data, a compressed, machine‐readable representation of an audio song that encodes its digital content. We describe a model called MSA‐SRec (MIDI Based Self Attentive Sequential Music Recommendation), a latent factor‐based self‐attentive deep learning model that uses a substantial amount of sequential information as the song's content information for recommendation generation. MIDI is an under‐explored source of content information for music recommendation. We show that combining MIDI content data with user and item latent vectors produces reasonable recommendations, and that using MIDI rather than other music metadata performs better across various state‐of‐the‐art recommender models.
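The record does not include the model's implementation details, but the core idea the abstract names — self‐attention over a user's listening sequence, scored against item embeddings — can be illustrated with a minimal sketch. Everything here (dimensions, the toy history `seq`, and the use of random embeddings in place of MIDI‐derived content vectors) is a hypothetical stand‐in, not the authors' MSA‐SRec:

```python
import numpy as np

rng = np.random.default_rng(0)

n_items, d = 50, 16   # toy catalogue size and embedding dimension
seq = [3, 17, 42, 8]  # hypothetical user listening history (item ids)

# Random embeddings stand in for MIDI-derived content vectors.
item_emb = rng.normal(size=(n_items, d))

def self_attention(X):
    """Single-head scaled dot-product self-attention over a sequence."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    # Causal mask: each position attends only to itself and earlier items.
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -np.inf
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X

X = item_emb[seq]          # (len(seq), d) sequence of item vectors
h = self_attention(X)[-1]  # representation of the most recent position
scores = item_emb @ h      # score every catalogue item against it
top5 = np.argsort(-scores)[:5]
print(top5)
```

In the paper's setting, the item embeddings would additionally be fused with user and item latent factors; this sketch only shows the sequential self‐attention scoring step.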
- Subjects
MIDI (Standard); RECOMMENDER systems; DEEP learning
- Publication
Computational Intelligence, 2022, Vol 38, Issue 4, p1232
- ISSN
0824-7935
- Publication type
Article
- DOI
10.1111/coin.12501