+abstract: "Fast dynamics, nonlinearities, and tedious tuning are ubiquitous challenges in robotics and other physical machines, and just a few of the many reasons we are interested in leveraging learning for control. In this talk, we will explore the problem of learning controllers through three paradigms, ordered from general to structured learning problems: deep reinforcement learning, automatic imitation learning from optimal control, and auto-tuning via Bayesian optimization. I will highlight some of our recent results addressing key challenges faced in practice, such as enhanced uncertainty quantification for improved data efficiency and reliability in model-based reinforcement learning, as well as parameter-adaptive approximate model predictive control for imitation learning without retraining. By discussing these advances alongside applications, demonstrated through hardware experiments on unicycle robots, quadcopters, and cars, I aim to convey an understanding of the potential of these paradigms in both research and current practice."