1. We present a measure for the size of the model uncertainty set, resulting from prediction error identification or validation, that is directly connected to the size of the set of model-based controllers stabilizing all models in that set. This allows us to establish whether one identified model set is better suited for robust control design than another, leading to control-oriented experiment design guidelines.
2. We also present necessary and sufficient conditions for a specific controller to stabilize all models - or to achieve a given level of performance for all models - in an uncertainty set defined by such an ellipsoid in parameter space.
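As a rough, self-contained illustration of the kind of quantities involved (not the measure or the necessary-and-sufficient test presented in the talk), the Python sketch below samples models from a hypothetical ellipsoidal parameter uncertainty set, evaluates the worst-case pointwise chordal (nu-gap-type) distance to the nominal model, and checks by brute force whether a fixed controller stabilizes every sampled model. The model structure, controller gains, and ellipsoid matrix are all invented placeholders.

```python
import numpy as np

# --- Hypothetical setup (all values are illustrative placeholders) ---
# Model structure: G(s, theta) = theta_1 / (s^2 + theta_2 s + theta_3)
theta_hat = np.array([2.0, 1.5, 1.0])   # identified parameter estimate
P_inv = np.diag([40.0, 60.0, 80.0])     # ellipsoid {theta : (theta - theta_hat)' P_inv (theta - theta_hat) <= 1}

# A fixed (made-up) PI controller C(s) = (kp s + ki) / s
kp, ki = 1.2, 0.8

def closed_loop_stable(theta):
    """Check stability of the unity-feedback loop with C and G(., theta) via the roots
    of the characteristic polynomial s (s^2 + th2 s + th3) + th1 (kp s + ki)."""
    th1, th2, th3 = theta
    char = np.array([1.0, th2, th3 + th1 * kp, th1 * ki])   # descending powers of s
    return np.all(np.real(np.roots(char)) < 0)

def freq_resp(theta, w):
    """Frequency response G(jw, theta)."""
    th1, th2, th3 = theta
    s = 1j * w
    return th1 / (s**2 + th2 * s + th3)

def chordal_distance(theta_a, theta_b, w):
    """Pointwise chordal distance between two frequency responses; its peak over
    frequency equals the nu-gap when Vinnicombe's winding-number condition holds."""
    Ga, Gb = freq_resp(theta_a, w), freq_resp(theta_b, w)
    return np.abs(Ga - Gb) / np.sqrt((1 + np.abs(Ga)**2) * (1 + np.abs(Gb)**2))

def sample_ellipsoid(n):
    """Draw parameter vectors roughly uniformly from the ellipsoid around theta_hat."""
    L = np.linalg.cholesky(np.linalg.inv(P_inv))
    x = np.random.randn(n, 3)
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # directions on the unit sphere
    r = np.random.rand(n, 1) ** (1.0 / 3.0)         # radii for a uniform fill in 3D
    return theta_hat + (r * x) @ L.T

w = np.logspace(-2, 2, 400)
thetas = sample_ellipsoid(2000)

worst_gap = max(chordal_distance(theta_hat, th, w).max() for th in thetas)
all_stable = all(closed_loop_stable(th) for th in thetas)

print(f"worst-case chordal distance over sampled set: {worst_gap:.3f}")
print(f"controller stabilizes every sampled model:    {all_stable}")
```

A Monte Carlo check of this kind is only a sanity check on a finite sample; the conditions discussed in the talk replace the sampling with an exact test over the whole ellipsoid.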
Since our results are politically correct, they rely heavily on the nu-gap metric, mu-analysis, LMIs, LFTs and other S-procedures. The presentation, however, will aim at the average man or woman in the street, if such are to be found in Cambridge.