Robert Hotz reports in the Wall Street Journal:
Experts are wary of Big Data, algorithm-driven approaches because of the unconscious biases surrounding music perception that can creep into code. But songs from cultures around the world—from love ballads to lullabies—exhibit universal patterns across hundreds of societies that hint at the psychology behind music’s appeal. No matter where they are from, similar-sounding songs are associated with the same social activities, such as infant care, healing, love and dance. The research suggests a fundamental property underlying music’s hold on the mind, at a time when people streamed some 611 billion songs on demand last year.
Songs from cultures around the world—from love ballads to lullabies—exhibit some universal patterns, according to a new study of music makers across hundreds of societies that hints at the psychology behind music’s broad appeal.
No matter where they are from, similar-sounding songs are associated with the same social activities, such as infant care, healing, love and dance, the study found. The findings were published Thursday in the journal Science.
The new research suggests a fundamental property underlying music’s uncanny hold on the mind, several evolutionary biologists and musicologists said—at a time when people listen to more music than ever, streaming some 611 billion songs on demand last year, according to Nielsen data.
“That is something that has been strongly debated,” said evolutionary biologist Tecumseh Fitch at the University of Vienna in Austria, who studies human speech, music and language and wasn’t involved in the project.
“This is a scientific finding that will be surprising to many experts,” he said. “It hopefully will open a new era in our understanding of the world’s music.”

From Tuvan throat-singing and K-pop stylings to the pentatonic Berber songs of North Africa, and the 22-tone classical ragas of India to America’s 12-bar blues, songs all appear to share a basic sense of tonal structure.
“Music is something that has bedeviled anthropologists and biologists since Darwin,” said anthropologist Luke Glowacki at Pennsylvania State University and a senior author of the study. “If there were no underlying principles of the human mind, there would not be these regularities.”
Conducting an ambitious experiment in the digital humanities, data scientists and anthropologists combed through a century’s worth of field recordings and reports to identify shared features in the playlists of humankind.
To eliminate the biases of culture, sexism, music perception and Western scholarship, the researchers used a machine learning algorithm to sift musical data for significant patterns, cross-checked by musicologists and more than 30,000 listeners recruited through online crowdsourcing.
Many experts who study the music of other cultures are wary of such Big Data, algorithm-driven approaches because of the unconscious biases surrounding music perception that can creep into code.
“This could open the way to something bigger,” said ethnomusicologist Elizabeth Tolbert at Johns Hopkins University in Baltimore, who wasn’t involved in the study. “But they may be over-interpreting their results” by going too far in trying to find common patterns amid so much variation in music.
It took the researchers involved in the study four years to create music databases that they said addressed the challenges of cultural bias and data reliability. (They posted some of their work online at Harvard University’s Music Laboratory, in a series of tests people can take to study how the human mind creates and perceives music.)
To start, they drew on ethnographic reports documenting 315 cultures around the world that had been collected at Yale University. For more precise analysis, they then focused on music behavior in 60 especially well-documented societies, coding mentions of singing for 60 variables ranging from the age and sex of the singer to the time of day the song was performed and whether the singer was in costume.
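As a rough illustration of what that kind of coding might look like in practice, the sketch below represents a single ethnographic mention of singing as a structured record. It is not the study’s actual schema; the field names and the example values are hypothetical, and only a handful of the roughly 60 variables are shown.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical schema for one coded mention of singing in an ethnographic report.
# The real study coded ~60 variables; only a few illustrative ones appear here.
@dataclass
class SongObservation:
    society: str                  # which of the 60 well-documented societies
    context: str                  # e.g. "infant care", "healing", "love", "dance"
    singer_sex: Optional[str]     # "male", "female", or None if not reported
    singer_age: Optional[str]     # e.g. "adult", "child"
    time_of_day: Optional[str]    # e.g. "day", "night"
    singer_in_costume: Optional[bool] = None

# One made-up observation, purely for illustration:
obs = SongObservation(
    society="Example society",
    context="infant care",
    singer_sex="female",
    singer_age="adult",
    time_of_day="night",
    singer_in_costume=False,
)
print(obs)
```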
They also digitized field recordings to create a matching database of music and had those songs transcribed into Western musical notation.

“We wanted to build data sets that properly sample human cultural and geographic diversity,” said Manvir Singh, a cognitive and evolutionary anthropologist at Harvard University, who was a co-author of the study. “But there is no standard way to represent music across cultures.”
In sifting the data, the scientists asked music experts and people with no musical expertise to listen to snippets of songs and then categorize each as a lullaby, a dance, a healing song or a love song based on acoustic features such as tempo, rhythm and pitch. To cross-check the results, they also used a sorting algorithm to match music to behavior.
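The article does not name the specific sorting algorithm. A generic version of this kind of cross-check — training a classifier to predict a song’s social function from coarse acoustic features — might look like the sketch below, which uses standard scikit-learn tools and synthetic stand-in data rather than the study’s actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row describes a song by a few coarse acoustic
# features (tempo, rhythmic regularity, mean pitch, pitch range). The study
# used transcriptions and listener ratings, not random numbers.
n_songs = 400
X = np.column_stack([
    rng.uniform(40, 200, n_songs),   # tempo in beats per minute
    rng.uniform(0, 1, n_songs),      # rhythmic regularity (0 = free, 1 = strict)
    rng.uniform(100, 600, n_songs),  # mean pitch in Hz
    rng.uniform(1, 24, n_songs),     # pitch range in semitones
])
labels = rng.choice(["lullaby", "dance", "healing", "love"], n_songs)

# Can acoustic features alone predict a song's social function?
# With random labels, accuracy should hover near chance (about 0.25);
# the study's claim is that real songs do much better than chance.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```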
“There seems to be a link between the forms of a piece of singing and behavior,” said Harvard music psychologist Samuel Mehr, who led the research team. “We find that people are very accurate. We find that the machine can tell, too.”