
Engineering Tripos Part IIB, 4F3: An Optimisation Based Approach to Control, 2018-19

Module Leader and lecturer

Dr G Vinnicombe

Lecturer

Dr F.F. Forni

Timing and Structure

Lent term. 14 lectures + 2 examples classes. Assessment: 100% exam

Prerequisites

3F1 and 3F2 are useful.

Aims

The aims of the course are to:

  • introduce methods for feedback system design based on the optimisation of an objective, including reinforcement learning and predictive control.
  • demonstrate how such control laws can be computed and implemented in practice.

Objectives

As specific objectives, by the end of the course students should be able to:

  • understand the derivation and application of optimal control methods.
  • appreciate the main ideas, applications and techniques of predictive control and reinforcement learning.

Content

Introduction: Convex Optimisation (1L, Dr F Forni)

  • Formulation of convex optimisation problems (a short illustrative sketch follows this list)
  • Status of theoretical results and algorithms
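
As an illustration of the kind of problem formulation covered here, the sketch below sets up and solves a small constrained least-squares problem. It uses the CVXPY modelling package, which is not prescribed by the course, and the data and constraints are invented purely for illustration.

    import cvxpy as cp
    import numpy as np

    # Invented data for a constrained least-squares problem (illustration only)
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)

    # Decision variable and convex objective: minimise ||Ax - b||^2
    x = cp.Variable(5)
    objective = cp.Minimize(cp.sum_squares(A @ x - b))

    # Convex constraints: non-negativity and a Euclidean norm bound
    constraints = [x >= 0, cp.norm(x, 2) <= 1]

    problem = cp.Problem(objective, constraints)
    problem.solve()

    print("status:", problem.status)
    print("optimal value:", problem.value)
    print("optimal x:", x.value)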

Optimal Control (6L + 1 examples class, Dr F Forni)

  • Formulation of optimal control problems. Typical applications
  • Optimal control with full information (dynamic programming)
  • Control of linear systems with a quadratic objective function (see the sketch after this list)
  • Output feedback: ‘LQG’ control
  • Control design with an “H-infinity” criterion
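
As a concrete illustration of the LQR topic above, the sketch below computes the optimal state-feedback gain for a hypothetical discrete-time double integrator with a quadratic cost, by solving the discrete algebraic Riccati equation with SciPy. The model and weighting matrices are invented for illustration and are not taken from the course notes.

    import numpy as np
    from scipy.linalg import solve_discrete_are

    # Hypothetical discrete-time double integrator: x[k+1] = A x[k] + B u[k]
    dt = 0.1
    A = np.array([[1.0, dt],
                  [0.0, 1.0]])
    B = np.array([[0.5 * dt**2],
                  [dt]])

    # Quadratic stage cost x'Qx + u'Ru (weights chosen arbitrarily)
    Q = np.diag([1.0, 0.1])
    R = np.array([[0.01]])

    # Cost-to-go matrix P from the discrete algebraic Riccati equation
    P = solve_discrete_are(A, B, Q, R)

    # Optimal state feedback u[k] = -K x[k], with K = (R + B'PB)^{-1} B'PA
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

    print("K =", K)
    # The closed-loop eigenvalues of A - BK lie inside the unit circle
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))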

Predictive Control and an Introduction to Reinforcement Learning (7L + 1 examples class, Dr G Vinnicombe)

  • What is predictive control? Importance of constraints. Flexibility of specifications. Typical applications
  • Basic formulation of the predictive control problem without constraints and the receding horizon concept. Comparison with the unconstrained Linear Quadratic Regulator
  • Including constraints in the problem formulation. Constrained convex optimisation (a receding-horizon sketch follows this list)
  • Terminal conditions for stability 
  • Emerging applications: advantages and challenges
  • Policy and generalized policy iteration; rollout algorithms and predictive control
  • Approximate dynamic programming
  • Deep neural nets as universal approximators for value functions and policies
  • Simulation-based vs state-space models; Q-learning (a Q-learning sketch also follows this list)
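
The sketch below illustrates the receding-horizon idea with an input constraint: at each sampling instant a finite-horizon constrained quadratic programme is solved and only the first input is applied. The CVXPY package, the linear model, the horizon length and the constraint level are all assumptions made for illustration; a terminal cost or terminal constraint set, as discussed under "terminal conditions for stability", is indicated only by a comment.

    import cvxpy as cp
    import numpy as np

    # Hypothetical linear model, horizon and weights (illustration only)
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.005], [0.1]])
    N = 20                    # prediction horizon
    Q = np.diag([1.0, 0.1])   # state weighting
    R = 0.01                  # input weighting
    u_max = 1.0               # input constraint |u| <= u_max

    def mpc_input(x0):
        """Solve the finite-horizon constrained LQ problem and return the first input."""
        x = cp.Variable((2, N + 1))
        u = cp.Variable((1, N))
        cost = 0
        constraints = [x[:, 0] == x0]
        for k in range(N):
            cost += cp.quad_form(x[:, k], Q) + R * cp.square(u[0, k])
            constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                            cp.abs(u[0, k]) <= u_max]
        # A terminal cost and/or terminal constraint on x[:, N] would be added
        # here to guarantee closed-loop stability.
        prob = cp.Problem(cp.Minimize(cost), constraints)
        prob.solve()
        return u.value[:, 0]

    # Receding horizon: apply the first input, let the state evolve, re-solve
    x = np.array([5.0, 0.0])
    for t in range(30):
        u0 = mpc_input(x)
        x = A @ x + B @ u0
    print("final state:", x)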
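
A second sketch below shows a minimal tabular Q-learning loop on an invented five-state chain, illustrating simulation-based (model-free) learning of the state-action value function with an epsilon-greedy behaviour policy. The environment, learning rate, discount factor and exploration parameter are all assumptions made purely for illustration.

    import numpy as np

    # Invented 5-state chain MDP: actions 0 = left, 1 = right; reward 1 at the right end
    n_states, n_actions = 5, 2
    gamma, alpha, eps = 0.95, 0.1, 0.1
    rng = np.random.default_rng(0)

    def step(s, a):
        """Deterministic toy dynamics (illustration only, not from the course)."""
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        return s_next, reward

    Q = np.zeros((n_states, n_actions))
    s = 0
    for _ in range(5000):
        # Epsilon-greedy behaviour policy
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        # Q-learning update: bootstrap on the greedy value of the next state
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        # Restart the episode once the rewarding end state is reached
        s = 0 if s_next == n_states - 1 else s_next

    print("greedy policy (0 = left, 1 = right):", np.argmax(Q, axis=1))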

 

Booklists

Please see the Booklist for Group F Courses for references for this module.

Examination Guidelines

Please refer to Form & conduct of the examinations.

 
Last modified: 16/01/2019 17:13