As music distribution has evolved from physical media to digital content, tens of millions of songs have become instantly available to consumers through a number of online services. To help users search, browse, and discover songs in these extensive collections, music information retrieval systems have been developed to automatically analyze, index, and recommend musical content. This dissertation proposes machine learning methods for content-based automatic tagging of music and evaluates their performance on music annotation and retrieval tasks. The proposed methods rely on time-series models of the musical signal, which capture longer-term temporal dynamics of music in addition to timbral textures and make it possible to leverage different types of models and information at multiple time scales within a single system. Efficient algorithms for estimation and deployment are proposed for all the considered methods.