Scaling Reinforcement Learning Paradigms for Motor Control
Reinforcement learning offers a general framework to explain reward-related learning in artificial and biological motor control. However, current reinforcement learning methods rarely scale to high-dimensional movement systems ...
Kernel Carpentry for Online Regression using Randomly Varying Coefficient Model
We present a Bayesian formulation of locally weighted learning (LWL) using the novel concept of a randomly varying coefficient model. Based on this ...
LWPR: A Scalable Method for Incremental Online Learning in High Dimensions
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs ...
Reinforcement Learning for Humanoid Robots - Policy Gradients and Beyond
Reinforcement learning offers one of the most general frameworks to take traditional robotics towards true autonomy and versatility. However, applying reinforcement learning to high-dimensional movement systems like ...
The Bayesian Backfitting Relevance Vector Machine
(ACM Press, 2004-07)
Traditional non-parametric statistical learning techniques are often computationally attractive, but lack the same generalization and model selection abilities as state-of-the-art Bayesian algorithms which, however, ...
Local Dimensionality Reduction for Non-Parametric Regression
Locally weighted regression is a computationally efficient technique for non-linear regression. However, for high-dimensional data, this technique becomes numerically brittle and computationally too expensive if many ...
Efficient Learning and Feature Selection in High Dimensional Regression
(MIT Press, 2010)
We present a novel algorithm for efficient learning and feature selection in high-dimensional regression problems. We arrive at this model through a modification of the standard regression model, enabling us to derive a ...
Incremental Online Learning in High Dimensions
(MIT Press, 2005-12)
Locally weighted projection regression (LWPR) is a new algorithm for incremental non-linear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core ...
A Library for Locally Weighted Projection Regression
In this paper we introduce an improved implementation of locally weighted projection regression (LWPR), a supervised learning algorithm that is capable of handling high-dimensional input data. As the key features, our ...
Bayesian Kernel Shaping for Learning Control
In kernel-based regression learning, optimizing each kernel individually is useful when the data density, curvature of regression surfaces (or decision boundaries) or magnitude of output noise varies spatially. Previous ...