This course introduces the theory and application of optimal control for both linear and nonlinear systems. It focuses mainly on the key ideas and underlying theoretical concepts, but also covers basic algorithms for solving optimal control problems. The four topics covered are
- Nonlinear programming approach to optimal control,
- Dynamic programming and Hamilton-Jacobi-Bellman theory,
- Receding horizon optimal control (model predictive control and moving horizon estimation), and
- Pontryagin’s maximum principle and calculus of variations.
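As a small, purely illustrative taste of the dynamic-programming topic (this sketch is not course material), the finite-horizon linear-quadratic regulator for a scalar system can be solved by the backward Riccati recursion that the Hamilton-Jacobi-Bellman/dynamic-programming viewpoint yields; all numbers below are arbitrary example values.

```python
# Finite-horizon LQR for a scalar system x_{k+1} = a*x_k + b*u_k with
# stage cost q*x_k^2 + r*u_k^2, solved by the backward Riccati recursion
# that dynamic programming produces. All parameter values are illustrative.

def lqr_gains(a, b, q, r, horizon):
    """Return the time-varying feedback gains K_k in u_k = -K_k * x_k."""
    p = q  # terminal cost-to-go weight P_N = q
    gains = []
    for _ in range(horizon):
        k = (b * p * a) / (r + b * p * b)   # optimal gain at this stage
        p = q + a * p * a - a * p * b * k   # Riccati update for the cost-to-go
        gains.append(k)
    gains.reverse()  # the recursion runs backward in time
    return gains

def simulate(a, b, gains, x0):
    """Roll out the closed loop x_{k+1} = (a - b*K_k) * x_k."""
    x, traj = x0, [x0]
    for k in gains:
        x = (a - b * k) * x
        traj.append(x)
    return traj

gains = lqr_gains(a=1.2, b=1.0, q=1.0, r=0.1, horizon=20)
traj = simulate(1.2, 1.0, gains, x0=5.0)
print(traj[-1])  # the unstable open-loop system is driven toward the origin
```

The same backward-recursion structure carries over to the nonlinear case, where the cost-to-go no longer has a closed quadratic form and must be approximated numerically.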
Basic knowledge of (linear) dynamical systems and control (e.g., System Theory I and II) is required as a prerequisite.
More detailed information on the course may be found on RWTHonline.
The teaching material is available on RWTHmoodle.