Doctoral Thesis

On the Creation and Use of Forward Models in Robot Motor Control

Hannigan, Emily Jean

Advancements in robotics have the potential to aid humans in many realms of exploration as well as daily life: from search and rescue work, to space and deep sea exploration, to in-home assistance to improve the quality of life for those with limited mobility. One of the main milestones that needs to be met for robotics to achieve these ends is a robust ability to manipulate objects and locomote in cluttered and changing environments. A prerequisite to these skills is the ability to understand the current state of the world as well as how actions result in changes to the environment; in short, a robot needs a way to model itself and the world around it. With recent advances in machine learning and access to cheap and fast computation, one of the most promising avenues for creating robust models is to learn a neural network to approximate the dynamics of the system.

Learning a data-driven model that accurately replicates the dynamics of a robot and its environment is an active area of robotics research. This model needs to be accurate, it needs to operate using sensors that are often high dimensional, and it needs to be robust to changes within the system and the surrounding environment. In this thesis, we investigate ways to improve the process of learning data-driven dynamics models as well as ways to reduce the dimensionality of a robot's state space.

We start by trying to improve the long-term accuracy of neural-network-based forward models. Learning forward models is more complicated than it appears on the surface. While it is easy to learn a model to predict the change of a system over a short horizon, it is challenging to ensure this performance over a long horizon. We investigate the concept of adding temporal information into the loss function of the forward model during training; we demonstrate that this improves the accuracy of a model when it is used to predict over long horizons.
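The core idea, penalizing error accumulated over a multi-step rollout rather than only one-step error, can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual architecture or loss; the function names and the toy linear dynamics are hypothetical.

```python
import numpy as np

def multi_step_loss(model, s0, actions, targets):
    """Roll the model forward on its own predictions and average the
    per-step errors, so long-horizon drift is penalized at train time."""
    s = s0
    total = 0.0
    for a, target in zip(actions, targets):
        s = model(s, a)  # feed the prediction back in, not the true state
        total += np.mean((s - target) ** 2)
    return total / len(actions)

# Toy linear dynamics s' = s + a, used only to exercise the loss.
toy_model = lambda s, a: s + a
s0 = np.zeros(2)
actions = [np.ones(2)] * 3
targets = [np.full(2, 1.0), np.full(2, 2.0), np.full(2, 3.0)]
loss = multi_step_loss(toy_model, s0, actions, targets)  # 0.0 for a perfect model
```

Because each prediction is fed back as the next input, a small one-step bias compounds across the horizon and shows up directly in the training signal, which a purely one-step loss would miss.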

While we are currently working with low dimensional systems, we eventually want to apply our learned models to robots with high dimensional state spaces. To make this computationally feasible in the real world, we need ways to learn a lower dimensional representation of the state space (also known as a latent space). We present a method to improve the usefulness of a learned latent space using a method we call context training: we learn a latent space alongside a forward model to encourage the learned latent space to retain the variables critical to learning the dynamics of the system.
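One way such a joint objective might be structured is sketched below, assuming a simple autoencoder (encode/decode) paired with a latent forward model (predict). The function names and the weighting term beta are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def context_loss(encode, decode, predict, s, a, s_next, beta=1.0):
    """Reconstruction loss plus a latent-dynamics term: the encoder is
    pushed to keep whatever state information the forward model needs."""
    z = encode(s)
    recon = np.mean((decode(z) - s) ** 2)                   # autoencoder term
    dyn = np.mean((predict(z, a) - encode(s_next)) ** 2)    # forward-model term
    return recon + beta * dyn

# Identity encoder/decoder and matching latent dynamics give zero loss
# on the toy system s' = s + a.
s, a = np.array([0.5, -0.5]), np.array([0.1, 0.1])
loss = context_loss(lambda x: x, lambda z: z, lambda z, u: z + u, s, a, s + a)
```

A plain autoencoder optimizes only the first term, so it is free to discard dynamically relevant variables that contribute little to reconstruction; the second term penalizes exactly that.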

In all of our experiments, we devote significant attention to analysis and evaluation. Much of the literature demonstrating the effectiveness of data-driven forward models in robot control settings presents only the final controller performance, which left us curious about what the model was learning independent of the control scenario. We therefore take a close look at exactly what data-driven forward models are predicting: we evaluate all of our models over long horizons, and we plot the full distribution of loss values over the entire horizon rather than reporting only the mean and median. The literature on data-driven models that does evaluate model prediction accuracy often focuses on the mean and median prediction errors; while these are important metrics, we found that looking at them alone can sometimes obscure subtle but important effects. A high mean loss is often a result of poor performance on only a subset of the test dataset: one model can outperform other models on a majority of the test set, yet appear to be the worst performer because a few highly inaccurate outliers skew its mean.
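As a concrete illustration of how a few outliers can skew the mean, consider the synthetic error distributions below (the numbers are invented purely to make the point, not drawn from the thesis's experiments):

```python
import numpy as np

# Model A: very accurate on 95% of the test set, wildly wrong on 5 outliers.
errors_a = np.concatenate([np.full(95, 0.05), np.full(5, 20.0)])
# Model B: mediocre everywhere.
errors_b = np.full(100, 0.6)

# By mean error, A looks like the worst performer...
mean_a, mean_b = errors_a.mean(), errors_b.mean()              # ~1.05 vs 0.6
# ...but the distribution shows A is better on the vast majority of cases.
median_a, median_b = np.median(errors_a), np.median(errors_b)  # 0.05 vs 0.6
frac_a_better = np.mean(errors_a < mean_b)                     # 0.95
```

Plotting the full error distribution surfaces exactly this situation, whereas a table of mean errors would simply rank model A last.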

We observe that models often have a subset of a test dataset on which they perform best; using an ensemble of models, we seek to limit the use of each model to the regions of the test dataset where it has high accuracy. We find that if we train an ensemble of forward models, the accuracy of the models is higher when they all agree on a prediction. Conversely, when the ensemble of models disagrees, the prediction is often poor. We explore this relationship and propose future ways to apply it.
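This agreement signal can be sketched as follows, assuming a simple ensemble of forward models and an illustrative spread threshold; the names and the specific disagreement measure are assumptions, not the thesis's formulation.

```python
import numpy as np

def ensemble_predict(models, s, a, threshold=0.1):
    """Average the ensemble's predictions and flag whether the members
    agree; a large spread suggests the prediction should not be trusted."""
    preds = np.stack([m(s, a) for m in models])
    spread = preds.std(axis=0).max()   # worst-case per-dimension disagreement
    return preds.mean(axis=0), spread < threshold

s, a = np.array([0.0]), np.array([1.0])
agree = [lambda s, a: s + a, lambda s, a: s + a]         # identical members
split = [lambda s, a: s + a, lambda s, a: s + 2.0 * a]   # divergent members
_, trusted_1 = ensemble_predict(agree, s, a)   # True: spread is zero
_, trusted_2 = ensemble_predict(split, s, a)   # False: spread is 0.5
```

Since each member is trained on a different realization of the data, they tend to extrapolate differently outside their shared region of competence, so disagreement serves as a cheap proxy for prediction error.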

Finally, we look into the application of improved model accuracy and context-trained latent spaces. We start by testing the performance of our context training architecture as a method to reduce the state space dimensionality in a model-free reinforcement learning (MFRL) reaching task. We hypothesize that a policy trained with a latent space observation derived using our context-trained encoder will outperform a policy trained with a latent space observation derived from a standard autoencoder. Unfortunately, we found no difference in task performance between the policies learned using either method. We end on a bright note by looking at the power of model-based control when we have access to an accurate model. We successfully use model predictive control (MPC) to generate robust locomotion for a simulated snake robot. With access to an accurate model, we are able to generate realistic snake gaits in a variety of environments with very little parameter tuning, and these gaits are robust to changes in the environment.
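The flavor of model predictive control with a learned model can be sketched as random-shooting MPC on a toy one-dimensional system. The snake robot's dynamics and cost in the thesis are far richer; everything below, including the sampling scheme and function names, is an illustrative assumption.

```python
import numpy as np

def mpc_first_action(model, cost, s0, horizon=5, n_samples=256, seed=0):
    """Sample candidate action sequences, roll each through the model,
    and return the first action of the cheapest predicted rollout."""
    rng = np.random.default_rng(seed)
    best_cost, best_a0 = np.inf, 0.0
    for _ in range(n_samples):
        seq = rng.uniform(-1.0, 1.0, size=horizon)
        s, total = s0, 0.0
        for a in seq:
            s = model(s, a)    # predict the next state
            total += cost(s)   # accumulate predicted cost
        if total < best_cost:
            best_cost, best_a0 = total, seq[0]
    return best_a0

# Toy task: drive the state to zero under s' = s + a.
# The chosen first action typically pushes the state toward the goal (a0 < 0).
a0 = mpc_first_action(lambda s, a: s + a, lambda s: s * s, s0=1.0)
```

Only the first action of the best sequence is executed; the optimization is then re-run from the new state, which is what makes MPC robust to disturbances and model error at each step.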



More About This Work

Academic Units
Mechanical Engineering
Thesis Advisors
Ciocarlie, Matei Theodor
Ph.D., Columbia University
Published Here
September 27, 2023