Now showing items 1-10 of 59
A Probabilistic Approach to Robust Shape Matching and Part Decomposition
We present a probabilistic approach to shape matching which is invariant to rotation, translation and scaling. Shapes are represented by unlabeled point sets, so discontinuous boundaries and non-boundary points do not ...
Scaling Reinforcement Learning Paradigms for Motor Control
Reinforcement learning offers a general framework to explain reward related learning in artificial and biological motor control. However, current reinforcement learning methods rarely scale to high dimensional movement systems ...
Robustness of VOR and OKR adaptation under kinematics and dynamics transformations
Many computational models of vestibulo-ocular reflex (VOR) adaptation have been proposed; however, none of these models have explicitly highlighted the distinction between adaptation to dynamics transformations, in which ...
Kernel Carpentry for Online Regression using Randomly Varying Coefficient Model
We present a Bayesian formulation of locally weighted learning (LWL) using the novel concept of a randomly varying coefficient model. Based on this ...
LWPR: A Scalable Method for Incremental Online Learning in High Dimensions
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs ...
Reconstructing Null-space Policies Subject to Dynamic Task Constraints in Redundant Manipulators
We consider the problem of direct policy learning in situations where the policies are only observable through their projections into the null-space of a set of dynamic, non-linear task constraints. We tackle the issue ...
Value Function Approximation on Non-Linear Manifolds for Robot Motor Control
The least squares approach works efficiently in value function approximation, given appropriate basis functions. Because of its smoothness, the Gaussian kernel is a popular and useful choice as a basis function. However, ...
Reinforcement Learning for Humanoid Robots - Policy Gradients and Beyond
Reinforcement learning offers one of the most general frameworks to take traditional robotics towards true autonomy and versatility. However, applying reinforcement learning to high dimensional movement systems like ...
Behaviour Generation in Humanoids by Learning Potential-based Policies from Constrained Motion
(Taylor and Francis, 2008-12)
Movement generation that is consistent with observed or demonstrated behaviour is an efficient way to seed movement planning in complex, high-dimensional movement systems like humanoid robots. We present a method for learning ...
The Bayesian Backfitting Relevance Vector Machine
(ACM Press, 2004-07)
Traditional non-parametric statistical learning techniques are often computationally attractive, but lack the same generalization and model selection abilities as state-of-the-art Bayesian algorithms which, however, ...