An efficient neural network can now take a series of music files as input and quickly classify them by genre and style, thanks to work published in the International Journal of Web Science. Such a system could be a boon to music streaming services hoping to offer their users effective recommendations, pointing them towards novel music they may enjoy as much as their old favourites.
Many millions of people now listen to music through online streaming or download services on their computers, smart devices and mobile phones, rather than selecting a plastic disc from a collection to play on a dedicated machine. As such, the enjoyment and recommendation of new music can draw on the vast repositories of information found online, as well as the connectivity of online communities. However, for a system to automate recommendations to users, each piece of music must be appropriately tagged with respect to genre, style, tempo, and other such characteristics.
Jagendra Singh of the School of Computer Science Engineering and Technology at Bennett University in Greater Noida, India, has tested the system on six types of music: jazz, hip-hop, electronic, rock, classical, and folk, and found it to be effective. The algorithm performs even better when the spectrographic frequency of the sounds and the time-sequence pattern are incorporated as inputs to his hybrid model.
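Neither the paper's title nor this summary spells out the full architecture, but a hybrid of this kind is typically built by pairing a convolutional network that reads the spectrogram's frequency content with a recurrent network that follows the time-sequence pattern. The PyTorch sketch below shows one plausible shape for such a model; the layer sizes, the 64-band mel-spectrogram input, and the HybridGenreClassifier name are illustrative assumptions, not details taken from Singh's model.

import torch
import torch.nn as nn

GENRES = ["jazz", "hip-hop", "electronic", "rock", "classical", "folk"]

class HybridGenreClassifier(nn.Module):
    """Illustrative hybrid model: CNN over frequency, LSTM over time."""
    def __init__(self, n_mels=64, hidden=128, n_classes=len(GENRES)):
        super().__init__()
        # CNN branch: learns local spectral (frequency) patterns.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),   # halve the frequency axis, keep time intact
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        # LSTM branch: models how the CNN features evolve over time.
        self.lstm = nn.LSTM(input_size=32 * (n_mels // 4),
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, spec):                  # spec: (batch, 1, n_mels, time)
        x = self.cnn(spec)                    # (batch, 32, n_mels // 4, time)
        x = x.permute(0, 3, 1, 2).flatten(2)  # (batch, time, features)
        _, (h, _) = self.lstm(x)              # final hidden state summarises the clip
        return self.head(h[-1])               # (batch, n_classes) genre logits

# Usage: classify a batch of eight roughly 3-second mel-spectrogram excerpts.
model = HybridGenreClassifier()
clips = torch.randn(8, 1, 64, 130)            # 64 mel bands x 130 time frames
logits = model(clips)
print(GENRES[logits[0].argmax().item()])      # predicted genre for the first clip

In practice, each audio clip would first be converted to a mel-spectrogram (with a library such as librosa) before being passed to the model; the LSTM's final hidden state summarises the whole excerpt, which is what lets the frequency content and the time-sequence pattern contribute to a single prediction.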
While word-of-mouth recommendations among music fans will inevitably persist, the diversity and sheer volume of music now available online means that music can reach new audiences more quickly. Moreover, algorithmic recommendation systems could serve fans keen to seek out novelty without waiting for a friend or contact to discover the next great hit for them.
Singh, J. (2022) ‘An efficient deep neural network model for music classification’, International Journal of Web Science, Vol. 3, No. 3, pp.236–248.