Cheng (Calvin) Yang's Music IR Research Page

Content-Based Music Retrieval

Developments in internet technology have made a large volume of multimedia data, in particular music audio data, available to the general public, yet few search tools exist to help users navigate these data. Most existing tools rely on file names or text labels, and they become useless when meaningful text descriptions are not available. A truly content-based music retrieval system should be able to find similar songs based on their underlying score or melody, regardless of how they are labeled. Past research on content-based music retrieval has focused primarily on score-based data such as MIDI rather than on raw audio. However, most music data is found in various raw audio formats, and there is no known algorithm to convert raw audio music files into a MIDI-style representation.

My primary research interest is content-based music retrieval over raw audio databases, where both the underlying database and the user query are given in raw audio formats such as .wav.
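As a rough illustration of this setting, the sketch below indexes a small collection of .wav files and ranks them against a query clip by a coarse spectral (chroma) distance. It assumes the librosa library, hypothetical file paths, and illustrative parameters; none of these come from this page, and it is not the actual retrieval system.

    # Sketch: query-by-audio retrieval over a raw-audio (.wav) collection.
    # Assumes the librosa library; directory layout and parameters are
    # illustrative only, not taken from this page.
    import glob
    import numpy as np
    import librosa

    def chroma_features(path, sr=22050, hop_length=2048):
        # Reduce a raw audio file to a 12 x T chroma (pitch-class) sequence.
        y, sr = librosa.load(path, sr=sr, mono=True)
        return librosa.feature.chroma_stft(y=y, sr=sr, hop_length=hop_length)

    def build_index(wav_dir):
        # Precompute features for every .wav file in the collection.
        return {p: chroma_features(p) for p in glob.glob(wav_dir + "/*.wav")}

    def rank_by_mean_chroma(query_path, index, top_k=5):
        # Coarse ranking: cosine distance between time-averaged chroma vectors.
        q = chroma_features(query_path).mean(axis=1)
        scored = []
        for path, feats in index.items():
            d = feats.mean(axis=1)
            cos = np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d) + 1e-9)
            scored.append((1.0 - cos, path))
        return sorted(scored)[:top_k]

    # Hypothetical usage:
    # index = build_index("audio_db")
    # print(rank_by_mean_chroma("query.wav", index))

Averaging chroma over time ignores melodic order, so a ranking like this can only serve as a coarse first pass; the alignment-based comparison sketched further below is one way to take temporal structure into account.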

Our notion of similarity follows the intuitive one perceived by humans: two pieces are similar if they are fully or partially based on the same score, even if they are performed by different people or at different tempi. More specifically, we identify five types of "similar" music pairs, at increasing levels of difficulty:

Our current retrieval system handles the first four types of "similarity" with reasonable accuracy.
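Handling pairs that share a score but differ in performer or tempo suggests an alignment-based comparison rather than a fixed frame-to-frame one. The sketch below uses dynamic time warping (DTW) over chroma sequences as one plausible way to score such pairs; DTW and librosa are assumptions of this illustration, not the method used by the retrieval system described here.

    # Sketch: tempo-tolerant comparison of two performances of the same score.
    # Dynamic time warping lets a slower performance map onto a stretched
    # alignment path instead of being penalized frame by frame.
    # librosa is assumed; it is not mentioned on this page.
    import librosa

    def dtw_cost(path_a, path_b, sr=22050, hop_length=2048):
        # Lower cost means the two recordings are more likely to share a score.
        ya, _ = librosa.load(path_a, sr=sr, mono=True)
        yb, _ = librosa.load(path_b, sr=sr, mono=True)
        ca = librosa.feature.chroma_stft(y=ya, sr=sr, hop_length=hop_length)
        cb = librosa.feature.chroma_stft(y=yb, sr=sr, hop_length=hop_length)
        # D: accumulated cost matrix; wp: optimal warping path.
        D, wp = librosa.sequence.dtw(X=ca, Y=cb, metric="cosine")
        # Normalize by path length so long pieces are not unfairly penalized.
        return D[-1, -1] / len(wp)

    # Hypothetical usage:
    # print(dtw_cost("performance_fast.wav", "performance_slow.wav"))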

Publications

Other Publications

Cheng (Calvin) Yang / yangc@cs.stanford.edu / Stanford University Database Group