The Bayesian Backfitting Relevance Vector Machine
Proc. of International Conference on Machine Learning (ICML 2004)
Traditional non-parametric statistical learning techniques are often computationally attractive, but lack the generalization and model selection abilities of state-of-the-art Bayesian algorithms, which are, however, usually computationally prohibitive. This paper makes several important contributions that allow Bayesian learning to scale to more complex, real-world learning scenarios. First, we show that backfitting, a traditional non-parametric yet highly efficient regression tool, can be derived in a novel formulation within an expectation-maximization (EM) framework and can thus finally be given a probabilistic interpretation. Second, we show that the general framework of sparse Bayesian learning, and in particular the relevance vector machine (RVM), can be derived as a highly efficient algorithm using a Bayesian version of backfitting at its core. As we demonstrate on several regression and classification benchmarks, Bayesian backfitting offers a compelling alternative to current regression methods, especially when the size and dimensionality of the data challenge computational resources.
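For readers unfamiliar with the classical algorithm the paper builds on, the following is a minimal sketch of traditional backfitting for an additive model y ≈ α + Σ_j f_j(x_j): each component f_j is refit in turn against the partial residual that excludes it. The code uses a simple least-squares slope as the univariate smoother for each f_j; this is an illustrative assumption (any smoother could be substituted), not the paper's EM-based formulation.

```python
import numpy as np

def backfit(X, y, n_iter=20):
    """Classical backfitting for an additive model y ~ alpha + sum_j f_j(x_j).

    Each f_j is estimated with a simple univariate least-squares smoother;
    this is an illustrative sketch, not the paper's probabilistic EM version.
    """
    n, d = X.shape
    alpha = y.mean()                    # intercept absorbs the overall mean
    fitted = np.zeros((n, d))           # current estimate of each f_j(x_j)
    coefs = np.zeros(d)
    for _ in range(n_iter):
        for j in range(d):
            # Partial residual: remove every component except the j-th.
            r = y - alpha - fitted.sum(axis=1) + fitted[:, j]
            xj = X[:, j] - X[:, j].mean()
            coefs[j] = xj @ r / (xj @ xj)   # least-squares slope smoother
            fitted[:, j] = coefs[j] * xj
    return alpha, coefs

# Toy check: recover the coefficients of an additive linear model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] + 0.01 * rng.normal(size=200)
alpha, coefs = backfit(X, y)
```

Because each pass is a sequence of cheap univariate fits, the per-iteration cost grows only linearly in the number of inputs, which is the efficiency property the paper's Bayesian reformulation preserves.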