Last week we discussed the effect of disturbance timing on performance. This week we turn our attention to the location and speed of the upset and the Delay/Lag (dead time to time constant) ratio of the process.
Most control textbooks and papers show a step disturbance on the process output, which is the process measurement. This is the worst-case scenario in that the disturbance fully hits the controller before the controller can take any corrective action. The abrupt change in the process measurement can cause a large step and bump in the controller output from gain and rate action, respectively. In some respects, this disturbance location is similar to noise. Conventional Lambda factors (>1.0) do well in keeping a controller from overreacting to this disturbance.
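That step-and-bump from gain and rate action can be sketched with a minimal discrete PID increment calculation. The ideal, non-interacting PID form, derivative on the PV, and the settings below are assumptions chosen for illustration, not values from this post.

```python
# Minimal sketch (assumed ideal, non-interacting PID form, derivative on PV)
# of how a step disturbance on the measurement hits the controller output.
# The gain, reset, and rate settings are illustrative only.
def pid_increments(pv, sp=0.0, kc=2.0, ti=60.0, td=10.0, dt=1.0):
    """Return per-sample output increments (proportional, integral,
    derivative) for a measurement trajectory pv."""
    out = []
    e_prev = sp - pv[0]
    pv_prev = pv[0]
    for meas in pv:
        e = sp - meas
        dp = kc * (e - e_prev)                  # gain acts on the error change
        di = kc * dt / ti * e                   # reset ramps gradually
        dd = -kc * td / dt * (meas - pv_prev)   # rate spikes on a PV step
        out.append((dp, di, dd))
        e_prev, pv_prev = e, meas
    return out

# A unit step on the measurement at the second sample:
steps = pid_increments([0.0, 1.0, 1.0, 1.0])
# At the step, the proportional term jumps by -kc = -2 and the rate term
# spikes by -kc*td/dt = -20; both increments return to zero afterward.
```

With these assumed settings the one-sample rate spike is ten times the proportional step, which is why a measurement step (or noise) is so punishing for derivative action.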
Most control literature also tends to focus on a process where the delay (dead time) is comparable in size or larger than the lag (time constant). In these cases, conventional Lambda factors again give good performance and robustness.
I have often heard professors and operators say that a loop is terrible because it has a huge lag (process time constant). This is true for disturbances downstream of the process that enter directly into the measurement. For a load upset (e.g. a feed, utility, or ambient upset) entering the process input, a large process time constant (Delay/Lag << 1.0) can provide incredibly tight control if a much smaller Lambda factor (<<1.0) is used.
Most of the important loops I have worked on in the chemical industry (column or vessel composition, pressure, and temperature control) have disturbances on the process input and a Delay/Lag ratio much less than one. The book New Directions in Bioprocess Modeling and Control discusses how the interactive process temperature time constants make the Delay/Lag ratio about 0.2 and how batch composition responses have a Delay/Lag ratio so small they look like an integrating process response.
Static mixers used for neutralization have a Delay/Lag ratio of about one, but the addition of the electrode time constant or a signal filter makes the Delay/Lag ratio less than one. Poor reagent piping, injection, and mixing design, or a large control valve dead band or resolution limit, can cause the delay to skyrocket. Large Delay/Lag ratios are often a symptom of poor plant/system design for chemical processes. On the other hand, there are processes, such as sheet or web thickness control, and analyzers with large cycle times and transportation delays, that make the loop very dead time dominant (Delay/Lag >> 1.0).
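The way an added electrode or filter lag shifts the ratio can be sketched with a common model-reduction rule of thumb: treat the largest time constant in the loop as the lag and lump the pure delays and all smaller time constants into an effective dead time. The static mixer numbers below are illustrative, not measured values.

```python
# Rule-of-thumb model reduction (assumed here, in the spirit of SIMC-style
# reduction): the largest time constant is the lag; pure delays and the
# remaining smaller time constants lump into an effective dead time.
def delay_to_lag_ratio(delays, time_constants):
    """Effective Delay/Lag ratio for a chain of delays and first-order lags."""
    lag = max(time_constants)
    dead_time = sum(delays) + sum(time_constants) - lag
    return dead_time / lag

# Static mixer alone: transport delay comparable to the mixing lag
ratio_mixer = delay_to_lag_ratio(delays=[5.0], time_constants=[5.0])       # 1.0
# Adding a slower electrode lag makes that lag dominant; the old mixing
# lag joins the effective dead time, but the ratio still drops
ratio_loop = delay_to_lag_ratio(delays=[5.0], time_constants=[5.0, 30.0])
```

Note the trade-off the sketch exposes: the electrode lag lowers the Delay/Lag ratio, but it does so by hiding the excursion from the controller, so it is not a free lunch.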
Feed composition, catalyst activity, metabolic pathway, and ambient temperature disturbances are generally very slow (upset lag of hours). Cooling water and steam disturbances can be faster depending upon system design (upset lag of minutes). Feed flow disturbances are much faster and generally reflect the response from reset action (upset lag of seconds). Step flow changes occur when pumps are turned on and when on-off (isolation) valves are opened.
As the upset slows down (upset lag increases), the peak error (maximum deviation) and integrated absolute error (total error) decrease, but the fractional improvement in IAE from more aggressive tuning stays the same for loops with a large process time constant (Delay/Lag < 1.0) or increases for dead time dominant loops (Delay/Lag > 1.0). In a way, the upset lag performs a similar task to the process time constant in slowing down the excursion rate of the process variable.
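The slowing effect of the upset lag can be seen in a tiny open-loop sketch: the PV excursion during one loop dead time (before any correction can reach the process) shrinks as the upset lag grows. The step load, lags, and dead time below are assumed values for illustration.

```python
# Sketch: a unit step load filtered by a first-order "upset lag" enters a
# first-order process.  Euler-integrate for one loop dead time to see how
# far the PV gets before the controller's correction can arrive.
# All dynamics below are illustrative assumptions.
def excursion_during_dead_time(upset_lag, process_lag=100.0, dead_time=5.0,
                               kp=1.0, dt=0.01):
    """PV excursion from a filtered unit step load after one dead time."""
    load_f, pv, t = 0.0, 0.0, 0.0
    while t < dead_time:
        load_f += dt / upset_lag * (1.0 - load_f)    # upset lag filters the step
        pv += dt / process_lag * (kp * load_f - pv)  # process lag slows the PV
        t += dt
    return pv

fast = excursion_during_dead_time(upset_lag=1.0)
slow = excursion_during_dead_time(upset_lag=60.0)
# The slow upset (lag of a minute vs a second) produces a much smaller
# excursion during the dead time, just as the text describes.
```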
If there were no upsets, you wouldn’t need a controller. You could just set the control valve to a predetermined position.
The following screen prints and Excel file compare the performance of different types of tuning for various Delay/Lag ratios for load upsets that enter as process inputs. Lambda tuning does well for dead time dominant processes and can be made to do as well as Simplified Internal Model Control (SIMC) for lag dominated processes by using a Lambda equal to the dead time (a Lambda factor equal to the Delay/Lag ratio). See our first blog on the Unification of Tuning Methods for more info.
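The equivalence above follows from the common Lambda tuning relations for a self-regulating first-order plus dead time process with a PI controller: reset time Ti = tau and gain Kc = tau / (Kp * (Lambda + theta)), with Lambda = (Lambda factor) * tau. The numbers below are an illustrative lag-dominant case, not from the screen prints.

```python
# Common Lambda tuning relations (self-regulating first-order plus dead
# time process, PI controller): Ti = tau, Kc = tau / (Kp * (lambda + theta)),
# where lambda = lambda_factor * tau.  Choosing lambda equal to the dead
# time (lambda_factor = Delay/Lag ratio) gives Kc = 0.5 * tau / (Kp * theta),
# the SIMC gain for that tuning speed.
def lambda_pi(kp, theta, tau, lambda_factor):
    """Return (controller gain, reset time) from Lambda tuning."""
    lam = lambda_factor * tau
    kc = tau / (kp * (lam + theta))
    ti = tau
    return kc, ti

# Lag-dominant process (Delay/Lag = 0.1), Lambda set equal to the dead time:
kc, ti = lambda_pi(kp=1.0, theta=10.0, tau=100.0, lambda_factor=0.1)
# kc = 100 / (1 * (10 + 10)) = 5.0, ti = 100.0
```

A conventional Lambda factor of 1.0 on the same process would give Kc = 100 / 110 ≈ 0.9, more than five times less gain, which is the performance penalty the text is pointing at for lag-dominant loops.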
Not discussed here are interaction and noise and how they reduce the desired degree of transfer of variability from the controlled variable (controller PV) to the manipulated variable (controller output). Also not addressed is what change in the loop gain, delay, and lag (nonlinearity) can occur and whether this change in dynamics makes the loop too oscillatory. In general, there is a trade-off between performance and robustness whenever you are tuning a controller. Larger Lambda factors reduce the transfer of variability and improve the robustness of the controller. In summary, to evaluate a control strategy, algorithm, or tuning, one should consider:
(1) Desired degree of transfer of variability from controller PV to controller output
(2) Amount of nonlinearity and its effect on variability
(3) Timing of disturbance
(4) Location of disturbance
(5) Speed of disturbance
(6) Delay/Lag ratio
How upsetting is this to dead time compensators and model predictive controllers? For answers to this and more, stay tuned.