Xiaoxiang refers to the region in China’s Hunan Province where the Xiao and Xiang rivers converge. It is also the title of a concerto for alto saxophone and orchestra by UC San Diego music professor and Qualcomm Institute composer in residence Lei Liang. The work was one of three finalists for the 2015 Pulitzer Prize for Music.
The concerto commemorates a tragic event that took place in the Xiaoxiang region during the Cultural Revolution. A woman’s husband was killed by a local official. With no way to seek justice, she retaliated by wailing like a ghost in the forest behind the official’s residence every evening. Months later, both the official and the woman went insane.
Liang, a Chinese-born American composer, used electronically transformed sounds to echo the ghostly wailing. Xiaoxiang premiered at the World Saxophone Congress XV in Bangkok, Thailand; a major revision of the concerto was performed by the Boston Modern Orchestra Project in 2014.
At the Qualcomm Institute, Liang is seeking new ways, both technical and artistic, of bringing past sensibilities to modern audiences. He is also collaborating with scientists and engineers to create databases and multimedia software tools for exploring and safeguarding recordings, and composing works to showcase these cutting-edge digital technologies.
Recently he premiered a new work, Hearing Landscapes, at the university’s Calit2 Theater. A standing-room-only crowd listened and watched as the big screen journeyed into twelve Chinese watercolor paintings by Huang Binhong (1865–1955). The music was accompanied by high-resolution images of the landscapes, captured by a team of cultural heritage engineers led by professor Falko Kuester, director of UC San Diego’s Center of Interdisciplinary Science for Art, Architecture, and Archaeology (CISA3). Multispectral imaging of the paintings offered insights into the artist’s creative process, which Liang then drew on to compose the music.
Hearing Landscapes included a piece by doctoral student Greg Surges, who developed audio software that translated visual cues from the images into the sound environment. Scholars in materials science, computer programming and analysis, and robotics engineering also assisted with the project.