Now showing items 1-6 of 6
Reconstructing Null-space Policies Subject to Dynamic Task Constraints in Redundant Manipulators
We consider the problem of direct policy learning in situations where the policies are only observable through their projections into the null-space of a set of dynamic, non-linear task constraints. We tackle the issue ...
LWPR: A Scalable Method for Incremental Online Learning in High Dimensions
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs ...
Value Function Approximation on Non-Linear Manifolds for Robot Motor Control
The least squares approach works efficiently in value function approximation, given appropriate basis functions. Because of its smoothness, the Gaussian kernel is a popular and useful choice as a basis function. However, ...
Reinforcement Learning for Humanoid Robots - Policy Gradients and Beyond
Reinforcement learning offers one of the most general frameworks to take traditional robotics towards true autonomy and versatility. However, applying reinforcement learning to high dimensional movement systems like ...
Implications of different classes of sensorimotor disturbance for cerebellar-based motor learning models
The exact role of the cerebellum in motor control and learning is not yet fully understood. The structure, connectivity and plasticity within cerebellar cortex have been extensively studied, but the patterns of connectivity ...
Synthesising Novel Movements through Latent Space Modulation of Scalable Control Policies
We propose a novel methodology for learning and synthesising whole classes of high dimensional movements from a limited set of demonstrated examples that satisfy some underlying 'latent' low dimensional task constraints. ...