Optimization Over State Feedback Policies for Robust Control with Constraints
P. J. Goulart, E. C. Kerrigan and J. M. Maciejowski
Automatica, Volume 42, Number 4, Pages 523--533, April 2006

Abstract
This paper is concerned with the optimal control of linear
discrete-time systems subject to unknown but bounded
state disturbances and mixed constraints on the state and input. It
is shown that the class of admissible affine state feedback control
policies with memory of prior states is equivalent to the class of
admissible feedback policies that are affine functions of the past
disturbance sequence. This implies that a broad class of
constrained finite horizon robust and optimal control problems,
where the optimization is over affine state feedback policies, can
be solved in a computationally efficient fashion using convex
optimization methods. This
equivalence result is used to design a robust receding horizon
control (RHC) state feedback policy such that the closed-loop system
is input-to-state stable (ISS) and the constraints are satisfied for
all time and all allowable disturbance sequences. The cost that is
chosen to be minimized in the associated finite horizon optimal
control problem is quadratic in the disturbance-free state and input
sequences. The value of the receding horizon
control law can be calculated at each sample instant using a single,
tractable and convex quadratic program (QP) if the disturbance set
is polytopic,
or a tractable second-order cone program (SOCP) if the
disturbance set is
given by a 2-norm bound.
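The equivalence between disturbance-feedback and state-feedback policies rests on the observation that each past disturbance can be recovered exactly from the recorded state and input history via w_j = x_{j+1} - A x_j - B u_j. A minimal numerical sketch of this substitution is below; the system matrices, policy parameters (v, M), and dimensions are arbitrary illustrative values, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 2, 1, 4  # state dimension, input dimension, horizon (illustrative)

A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

# Affine disturbance-feedback policy: u_i = v_i + sum_{j<i} M[i][j] @ w_j
v = [rng.standard_normal(m) for _ in range(N)]
M = [[rng.standard_normal((m, n)) for j in range(i)] for i in range(N)]

# One disturbance realization and initial state
x0 = rng.standard_normal(n)
w = [rng.standard_normal(n) for _ in range(N)]

# Roll out the closed loop under the disturbance-feedback policy
x, u_df = [x0], []
for i in range(N):
    u = v[i] + sum((M[i][j] @ w[j] for j in range(i)), np.zeros(m))
    u_df.append(u)
    x.append(A @ x[i] + B @ u + w[i])

# Replay using only the state/input history: recover each disturbance as
# w_j = x_{j+1} - A x_j - B u_j, then apply the same policy. Since each
# past input is itself a function of prior states, the resulting control
# law is an affine function of the state history alone.
u_sf = []
for i in range(N):
    u = v[i].copy()
    for j in range(i):
        w_rec = x[j + 1] - A @ x[j] - B @ u_sf[j]
        u += M[i][j] @ w_rec
    u_sf.append(u)

# Both parameterizations produce identical input sequences
assert all(np.allclose(a, b) for a, b in zip(u_df, u_sf))
```

The practical point of working in the disturbance-feedback parameterization is that the set of admissible (M, v) pairs is convex, whereas optimizing directly over state-feedback gains is in general non-convex.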
BibTeX Entry
@Article{goulart:kerrigan:maciejowski:2006,
  author  = {Goulart, P. J. and Kerrigan, E. C. and Maciejowski, J. M.},
  journal = {Automatica},
  title   = {Optimization Over State Feedback Policies for Robust Control with Constraints},
  year    = {2006},
  month   = {April},
  number  = {4},
  pages   = {523--533},
  volume  = {42}
}