This research investigates the problem of musical instrument recognition and examines its application to music segmentation. We propose a new feature-extraction framework in which audio frames are not placed uniformly, as is traditionally done. Uniform spacing inevitably means that some frames contain a transition between notes, or that two notes fall within a single frame; such a frame may then contain sounds from multiple instruments. In this work, onset detection is integrated with frame-location selection to mitigate this problem. The resulting framing scheme, called the onset-aware framing scheme, achieves comparable or better performance than traditional methods. A new histogram-based feature is also presented and combined with other common features for the musical instrument classification task. Feature reduction is applied to lower the dimensionality of the feature space. We conduct experiments on data sets of different sizes containing both synthesized and real musical signals. Finally, a simple segmentation method based on musical instrument classification is proposed and demonstrated with an example.
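To make the framing idea concrete, the following is a minimal sketch of onset-aware framing, not the paper's actual method: it detects note onsets with a simple short-time-energy rise detector (the paper's onset detector is unspecified here) and then places fixed-length analysis frames only inside inter-onset intervals, so no frame straddles a note transition. All function names, thresholds, and the toy signal are illustrative assumptions.

```python
# Hypothetical sketch of onset-aware framing: detect onsets first,
# then confine analysis frames to inter-onset intervals.
# Detector, thresholds, and signal are illustrative, not from the paper.
import numpy as np

def detect_onsets(x, hop=512, ratio=4.0):
    """Return sample indices where short-time energy jumps by `ratio`."""
    n = len(x) // hop
    energy = np.array([np.sum(x[i * hop:(i + 1) * hop] ** 2) for i in range(n)])
    onsets = [0]
    for i in range(1, n):
        if energy[i] > ratio * (energy[i - 1] + 1e-12):
            onsets.append(i * hop)
    return onsets

def onset_aware_frames(x, onsets, frame_len=1024):
    """Place non-overlapping frames strictly inside each inter-onset span."""
    bounds = onsets + [len(x)]
    frames = []
    for start, end in zip(bounds[:-1], bounds[1:]):
        pos = start
        while pos + frame_len <= end:  # never cross the next onset
            frames.append((pos, pos + frame_len))
            pos += frame_len
    return frames

# Toy signal: silence, a quiet 440 Hz note, then a louder 660 Hz note.
sr = 8000
t = np.arange(sr) / sr
x = np.concatenate([
    np.zeros(sr // 4),
    0.2 * np.sin(2 * np.pi * 440 * t[: sr // 2]),
    0.8 * np.sin(2 * np.pi * 660 * t[: sr // 4]),
])
onsets = detect_onsets(x)
frames = onset_aware_frames(x, onsets)
# Every frame lies within one inter-onset interval, so no frame
# mixes a note transition (the failure mode of uniform framing).
assert all(not any(s < o < e for o in onsets) for (s, e) in frames)
```

With uniform framing, some frames around samples 2000 and 6000 of this toy signal would contain pieces of two different notes; the onset-aware placement above skips those straddling positions by design.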