2019 Theses Doctoral
New Stable Inverses of Linear Discrete Time Systems and Application to Iterative Learning Control
Digital control requires discrete time models, but conversion from continuous time to discrete time, fed by a zero order hold, introduces sampling zeros that lie outside the unit circle, i.e. non-minimum phase (NMP) zeros, in the majority of systems. Some systems are also already NMP in continuous time. In both cases, the inverse problem of finding the input required to track a desired output exactly produces an unstable causal control action: the control grows exponentially with each time step, and the error between time steps grows exponentially as well. This prevents many control approaches from making use of inverse models.
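A minimal sketch of the sampling-zero phenomenon (a standard textbook example, not a system from the thesis): discretizing a triple integrator G(s) = 1/s^3, which has no zeros at all, with a zero-order hold yields a discrete transfer function with numerator proportional to z^2 + 4z + 1, one of whose roots lies outside the unit circle.

```python
import numpy as np
from scipy import signal

# Triple integrator G(s) = 1/s^3, discretized with a zero-order hold.
T = 0.1  # sample period, an arbitrary illustrative choice
numd, dend, _ = signal.cont2discrete(([1.0], [1.0, 0.0, 0.0, 0.0]),
                                     T, method='zoh')

# Trim numerical noise in the leading numerator coefficient before
# taking roots (the discrete numerator has lower degree than the
# denominator, so ss2tf pads it with a leading ~0 term).
b = numd.flatten()
while abs(b[0]) < 1e-9 * np.max(np.abs(b)):
    b = b[1:]

zeros = np.roots(b)
print(np.sort(zeros.real))  # roots of z^2 + 4z + 1: about -3.732 and -0.268
```

The zero near -3.732 is outside the unit circle for every sample period, so the discrete model is NMP even though the continuous one is not.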
This work first presents the problem statement for the existing stable inverse theorem, which seeks a bounded nominal state-input trajectory by solving a two-point boundary value problem obtained by decomposing the internal dynamics of the system. The causal (stable) part is specified starting from minus infinity in time, and the non-causal part starting from plus infinity. By solving for the nominal bounded internal dynamics, exact output tracking is achieved on the original finite time interval.
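The backward-in-time half of this decomposition can be illustrated with a toy scalar example (the numbers are assumptions for illustration, not from the thesis). For unstable internal dynamics eta[k+1] = a*eta[k] + b*y[k] with |a| > 1, integrating forward produces exponential growth, while integrating backward from the "plus infinity" side, where the desired output has already returned to zero, produces the bounded non-causal solution:

```python
import numpy as np

# Scalar unstable internal dynamics driven by a desired output y
# with finite support; a and b are assumed illustrative values.
a, b = 1.8, 1.0
N = 200
y = np.zeros(N)
y[50:100] = np.sin(np.linspace(0, np.pi, 50))  # finite-support output

# Unstable causal solution: forward from eta[0] = 0, grows like a^k.
eta_fwd = np.zeros(N + 1)
for k in range(N):
    eta_fwd[k + 1] = a * eta_fwd[k] + b * y[k]

# Bounded noncausal solution: backward from eta[N] = 0, i.e. from
# the "plus infinity" boundary condition of the two-point problem.
eta_bwd = np.zeros(N + 1)
for k in range(N - 1, -1, -1):
    eta_bwd[k] = (eta_bwd[k + 1] - b * y[k]) / a
```

Both arrays satisfy the same difference equation; only the boundary condition differs, and only the backward solution stays bounded.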
The new stable inverse concepts presented and developed here address this instability problem differently, based on modified versions of the problem statement, and in a way that is more practical for implementation. The statements of how the different inverse problems are posed are presented, along with their calculation and implementation. To produce zero tracking error at the addressed time steps, two modified statements are given: the initial delete and the skip step. The development presented here involves: (1) detecting the signature of instability in both the nonhomogeneous difference equation and the matrix form of finite time problems; (2) creating a new factorization of the system in matrix form that separates the maximum-phase part from the minimum-phase part, analogous to a transfer function factorization, and more generally modeling the behavior of finite time zeros and poles; (3) producing bounded stable inverse solutions, ranging from the minimum Euclidean norm solution satisfying different optimization objective functions to the solution having no projection on transient solution terms excited by initial conditions.
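A minimal sketch of the initial delete idea in the finite time matrix (lifted) form, using an assumed toy FIR model rather than an example from the thesis: the impulse response h = [1, 2] has a zero at z = -2, so the square lower-triangular system y = P u has an exponentially growing causal inverse, while deleting the first output equation leaves an underdetermined system whose minimum Euclidean norm solution is bounded.

```python
import numpy as np

# Lifted finite-time description y = P u for a toy NMP system with
# impulse response h = [1, 2] (zero at z = -2).
N = 50
h = np.array([1.0, 2.0])
P = np.zeros((N, N))
for k in range(N):
    P[k, k] = h[0]
    if k >= 1:
        P[k, k - 1] = h[1]

y_des = np.sin(2 * np.pi * np.arange(N) / 25)  # desired output

# Naive causal inverse u = P^{-1} y_des: excites the (-2)^k mode.
u_naive = np.linalg.solve(P, y_des)

# Initial delete: drop the first output equation, leaving an
# underdetermined system, and take the minimum-norm solution.
P_del = P[1:, :]
u_min = np.linalg.pinv(P_del) @ y_des[1:]
```

The deleted equation removes the one constraint that forces a projection onto the growing solution, so `u_min` tracks every remaining time step exactly while staying bounded.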
Iterative Learning Control (ILC) iterates with a real world control system that repeatedly performs the same task. It adjusts the control action based on the error history from the previous iteration, aiming to converge to zero tracking error. ILC has been widely used in applications that demand high precision trajectory tracking, e.g. semiconductor manufacturing sensors that repeatedly perform scanning maneuvers. Designing effective feedback controllers for NMP systems is challenging, and applying ILC to NMP systems is particularly problematic. Incorporating the initial delete stable inverse concept into ILC, the control action obtained in the limit, as the iterations tend to infinity, is a function of the tracking error produced by the command in the initial run. It is shown here that this dependence is very small, so that one can reasonably use any initial run. By picking an initial input that goes to zero approaching the final time step, the influence becomes particularly small. And by simply commanding zero in the first run, the resulting converged control minimizes the Euclidean norm of the underdetermined control history. Three main classes of ILC laws are examined, and it is shown that all converge to the identical control history: the converged result is not a function of the ILC law. These conclusions apply to ILC that aims to track a given finite time trajectory, and also to ILC that in addition aims to cancel the effect of a disturbance that repeats each run.
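The zero-first-run property can be sketched with one of the simplest ILC laws, a gradient (P-transpose) update, on the toy initial-delete model from above; all numbers here are illustrative assumptions, not the thesis case study. Starting from a zero command, every update lies in the row space of the lifted matrix, so the converged control is the minimum Euclidean norm solution of the underdetermined system:

```python
import numpy as np

# Lifted toy NMP system (impulse response [1, 2], zero at z = -2)
# with the initial delete applied, making it underdetermined.
N = 30
P = np.zeros((N, N))
for k in range(N):
    P[k, k] = 1.0
    if k >= 1:
        P[k, k - 1] = 2.0
P = P[1:, :]                      # initial delete

y_des = np.cos(2 * np.pi * np.arange(1, N) / 10)
u = np.zeros(N)                   # first run: command zero
phi = 0.2                         # learning gain, an assumed value
for _ in range(300):
    e = y_des - P @ u             # tracking error of the last run
    u = u + phi * P.T @ e         # gradient-type ILC update

u_minnorm = np.linalg.pinv(P) @ y_des
```

The iteration drives the error to zero and lands on the same control history the pseudoinverse produces, consistent with the claim that the converged result is independent of the particular ILC law.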
Having these stable inverses opens up opportunities for many control design approaches. (1) ILC was the original motivation for the new stable inverses. Besides the initial delete scenario above, consider ILC that performs local learning within a trajectory: use quadratic cost control in general, but phase into the skip step stable inverse for portions of the trajectory that need high precision tracking. (2) One step ahead control uses a model to compute the control action at the current time step that produces the desired output at the next time step. Before it can be useful, it must be phased in to honor actuator saturation limits, and, being a true inverse, it requires that the system have a stable inverse. One can generalize this to p-step ahead control, updating the control action every p steps instead of every step. The skip step inverse determines how small p can be while still giving a stable implementation, and it can be quite small, so only a few steps of the future desired output are needed. (3) The statement in (2) can be reformulated as linear Model Predictive Control that updates every p steps instead of every step. This offers the ability to converge to zero tracking error at every time step of the skip step inverse, instead of the usual aim of converging to a quadratic cost solution. (4) Indirect discrete time adaptive control combines one step ahead control with the projection algorithm to perform real time identification updates. It has had limited application because it requires a stable inverse.
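One step ahead control as described in (2) can be sketched as follows, on an assumed minimum phase state space example (not a system from the thesis). From y[k+1] = C A x[k] + C B u[k], the input that produces the desired output at the next step is u[k] = (y_des[k+1] - C A x[k]) / (C B); this is a true inverse, which is why a stable inverse is a prerequisite:

```python
import numpy as np

# Assumed illustrative model: stable poles at 0.5 and 0.3, system
# zero at z = 0, so the causal inverse is stable and CB = 1 != 0.
A = np.array([[0.5, 1.0],
              [0.0, 0.3]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[0.5, 1.0]])

N = 60
y_des = np.sin(2 * np.pi * np.arange(N + 1) / 20)
x = np.zeros((2, 1))
y = np.zeros(N + 1)
u_hist = np.zeros(N)
for k in range(N):
    # Solve y_des[k+1] = C A x[k] + C B u[k] for u[k].
    u = (y_des[k + 1] - (C @ A @ x)[0, 0]) / (C @ B)[0, 0]
    u_hist[k] = u
    x = A @ x + B * u               # plant assumed equal to the model
    y[k + 1] = (C @ x)[0, 0]
```

With a perfect model the output matches the desired trajectory from the first step onward, and because this example is minimum phase the control stays bounded; on an NMP system the same formula would generate the exponentially growing control the thesis's skip step and p-step ahead formulations are designed to avoid.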
More About This Work
- Academic Units
- Mechanical Engineering
- Thesis Advisors
- Longman, Richard W.
- Ph.D., Columbia University
- Published Here
- October 30, 2019