To quantify the acoustic properties of these 500,000 or so songs, the researchers relied on crowd-sourced data that classified each song according to 12 acoustic properties, such as whether it is sung by a man or a woman, is happy or sad, and is acoustic or electronic. Songs were also categorised by mood and genre, such as hip-hop, blues, country and house music.
Less than 4 per cent of songs in the entire sample found their way onto the Top 100 Singles Chart. To see what set these songs apart, the researchers employed a machine-learning method known as the “random forest” algorithm to crunch through all the data.
Sure enough, some noteworthy patterns emerged.
“Successful songs are happier, brighter, more party-like, more danceable and less sad than most songs,” the team wrote.
That may sound like an obvious recipe for pop-music success. But it actually went against the dominant musical trends.
Over the decades, songs exhibited “a clear downward trend in ‘happiness’ and ‘brightness’, as well as a slight upward trend in ‘sadness’,” the study authors reported. “The public seems to prefer happier songs, even though more and more unhappy songs are being released each year.”
That observation matched up with previous studies of song lyrics that found they contained fewer “positive emotions” and made more references to loneliness and social isolation as the years went by.
“It is interesting that, in this particular instance, acoustic characteristics of songs indicate similar patterns to those uncovered in lyrics,” the researchers wrote.
In addition, the successful songs in a given year tended to be less “male” than other songs released at the same time, according to the study.
To test the strength of their algorithm, they asked it to assess 1052 songs that were released in 2014 and predict which of them charted and which of them were also-rans.
When the algorithm used song data from 2009 to 2013 as a guide, it was able to make the correct guess 75 per cent of the time, the researchers reported.
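The study's actual data and code are not reproduced here, but the setup described above — train a random forest on acoustic features of songs from earlier years, then predict which later songs charted — can be sketched with scikit-learn. Everything below is synthetic: the random numbers stand in for the 12 crowd-sourced acoustic properties, and the labels stand in for whether a song reached the Top 100.

```python
# Illustrative sketch only: synthetic data standing in for the study's
# crowd-sourced acoustic features and chart outcomes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# 12 acoustic properties per song, as in the study; values here are random.
n_train, n_test = 5000, 1052        # 1052 mirrors the 2014 test set size
X_train = rng.random((n_train, 12))  # "2009-2013" songs
X_test = rng.random((n_test, 12))    # "2014" songs

# Hypothetical labels: 1 = charted, 0 = did not (~4% of songs chart).
y_train = (rng.random(n_train) < 0.04).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)           # learn patterns from earlier years
predicted = clf.predict(X_test)     # guess which later songs charted

print(predicted.shape)              # one charted/not-charted guess per song
```

With real data, the 75 per cent figure would come from comparing `predicted` against the known 2014 chart outcomes; on this random data the classifier has nothing genuine to learn.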
The accuracy got even better when the researchers included a non-acoustic variable – the “superstar” factor.
An artist was deemed a superstar if he or she had a song on the charts in the previous five years. In a given year, somewhere between half and two thirds of successful songs were from superstar artists. That compared with no more than 2 per cent of songs in the larger pool that didn’t make the charts.
When the purely acoustic data were combined with the superstar data, the algorithm correctly identified successful songs 85 per cent of the time. The accuracy improved to 86 per cent when the algorithm trained itself on songs going back to 2004.
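Incorporating the superstar factor amounts to appending one extra binary column to the feature matrix before training. A minimal sketch, again with invented data, where the label is deliberately correlated with the superstar flag to mimic the study's finding that half to two thirds of hits came from superstar artists:

```python
# Illustrative sketch: adding a binary "superstar" feature to the
# acoustic feature matrix. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 2000

X_acoustic = rng.random((n, 12))                    # 12 acoustic properties
superstar = (rng.random(n) < 0.1).astype(float)     # charted in prior 5 years?

# Hypothetical labels correlated with the superstar flag.
y = ((superstar + rng.random(n)) > 1.0).astype(int)

# Combine acoustic and non-acoustic features into one matrix.
X_combined = np.column_stack([X_acoustic, superstar])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_combined, y)

print(X_combined.shape)   # (2000, 13): 12 acoustic columns + superstar flag
```

Because the random forest can split on any column, a strongly predictive feature like the superstar flag is picked up automatically, which is consistent with the jump from 75 to 85 per cent accuracy the researchers reported.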
Even with this success rate, there are still limits on what computers can do, the team cautioned.
“We can see that, in general, successful songs are, for example, ‘happier,’ more ‘party-like,’ less ‘relaxed’ and more ‘female’ than most,” they concluded. But “this does not necessarily allow us to naively predict that a particular ‘happy, party-like, not relaxed’ song sung by a female is going to succeed.”