
Jun 02

Top Ten Limitations – Deadtime

Deadtime sets the ultimate limit to loop performance. Deadtime is any delay in the action needed to exceed the threshold for the start of error correction. The loop deadtime is the sum of pure actuation, correction, or recognition delays and the equivalent deadtime from lags smaller than the largest time constant in the loop. For unmeasured disturbances and controllers tuned for maximum disturbance rejection, the peak and integrated errors are proportional to the deadtime and the deadtime squared, respectively. To avoid deadtime in your understanding of deadtime, see my Dec 30, 2010 entry, Universal Concept – Deadtime. So given that deadtime is the culprit, what can you do to reduce loop deadtime?
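
Since this scaling drives everything that follows, here is a minimal Python sketch of the proportionality. The constants k_peak and k_int are hypothetical placeholders for the process- and tuning-specific factors; only the dependence on deadtime is from the text above.

```python
# Minimal sketch of how errors scale with loop deadtime for an
# unmeasured disturbance and a controller tuned for maximum
# disturbance rejection. The constants are hypothetical placeholders;
# only the proportionality is from the post.

def peak_error(deadtime, k_peak=1.0):
    """Peak error is proportional to the loop deadtime."""
    return k_peak * deadtime

def integrated_error(deadtime, k_int=1.0):
    """Integrated error is proportional to the deadtime squared."""
    return k_int * deadtime ** 2

# Halving the deadtime halves the peak error and quarters the
# integrated error:
for td in (4.0, 2.0):
    print(f"deadtime={td}: peak={peak_error(td)}, "
          f"integrated={integrated_error(td)}")
```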

Top Ten Ways to Reduce Loop Deadtime
(1) Improve axial mixing (also known as back mixing)
(2) Avoid controlling around volumes in series with a single loop (don’t control composition, level, pH, pressure, or temperature in volume 2 by manipulating a stream to volume 1)
(3) Increase process fluid velocity at thermowells and electrodes to reduce sensor lag
(4) Reduce transportation delay from the control point to thermowells and electrodes
(5) Decrease injection delay of reactants and reagents
(6) Reduce backlash (deadband) and stiction (stick-slip) in control valves
(7) Improve sensitivity threshold in actuators, positioners, and sensors
(8) Reduce sample transportation delay and cycle time of analyzers
(9) Use in-process sensors with a continuous measurement (e.g., a Coriolis meter, or probes to measure capacitance, chlorine, conductivity, dissolved oxygen, and pH) instead of analyzers with a sample system and a processing cycle time
(10) Reduce digital update times that are more than 20% of the loop deadtime to prevent increasing the ultimate limit of the peak error by more than 10% (the sketch after the next paragraph works through the numbers)

The deadtime from discontinuous updates is 50% of the update time plus the latency (the time until the result is available). For digital update times (scan time, wireless update rate, and module execution rate) where the latency is negligible, the deadtime is simply ½ the update time. For analyzers where the result is available at the end of the cycle time, the latency is the cycle time, and the deadtime is 1½ times the analyzer cycle time.
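
A small sketch of these relationships (the function name is mine; the arithmetic is from the paragraph above):

```python
def equivalent_deadtime(update_time, latency=0.0):
    """Deadtime from a discontinuous update: half the update time
    plus the latency (time until the result is available)."""
    return 0.5 * update_time + latency

# Negligible latency (scan time, wireless update rate, module execution):
print(equivalent_deadtime(1.0))                             # 0.5

# Analyzer whose result arrives at the end of its cycle time:
cycle_time = 10.0
print(equivalent_deadtime(cycle_time, latency=cycle_time))  # 15.0 = 1.5 * cycle

# Item (10) above: an update time of 20% of the loop deadtime adds
# 10% to the loop deadtime, and hence roughly 10% to the peak error,
# since peak error is proportional to deadtime.
loop_deadtime = 5.0
added = equivalent_deadtime(0.2 * loop_deadtime)            # 0.5
print(added / loop_deadtime)                                # 0.1
```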

There is also a potential loss of information from discontinuous updates. The maximum rate of change multiplied by the digital update time must be less than the maximum allowable control error. For fast processes, you need fast update times to even have a chance of acceptable performance. I used to think temperature was one of the slow processes where you could always get by with a digital update time of several seconds. Then in my interview for the Control Talk column Ultimate Limit to Performance, an example was given where a 1 sec scan time introduced an error of 1.6 degrees for a ramp rate of 100 degrees per minute as a wafer went from room temperature to 1150 degrees in semiconductor production.
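
The arithmetic behind that example is simply the maximum rate of change times the update time, as a quick sketch shows (the 100 degrees per minute and 1 second figures are from the interview; the function name is mine):

```python
def discrete_update_error(max_rate_per_min, update_time_s):
    """Worst-case error from a discontinuous update: the maximum
    rate of change multiplied by the update time."""
    return max_rate_per_min * update_time_s / 60.0

# Wafer temperature ramp from the interview: 100 degrees per minute
# sampled with a 1 second scan time introduces roughly 1.6 to 1.7
# degrees of error.
print(discrete_update_error(100.0, 1.0))  # ~1.67
```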

If the deadtime is in recognition, possibly due to a sensor, measurement, or controller update time or lag, there is technically no addition to the loop deadtime for the correction from a setpoint change or feedforward signal. If the feedforward or setpoint action were perfect, recognition deadtime would not affect loop performance. However, this is not much consolation, because the signal seen by the controller still has the recognition deadtime; you would not know how well you are doing. This is the dilemma I saw with the enhanced PID and a perfect setpoint response.

In real life there are always unanticipated errors. Recognition delays and lags increase the phase shift and the ultimate period for the correction of these errors. Some of the sources are imprecise valves, non-ideal tuning, feedforward calculation inaccuracies, nonlinearities, unmeasured disturbances, and the continual ramping action of the integral mode.

Less obvious is the need to correct the extreme action of controller gains designed to give a “closed loop time constant” (Lambda) much faster than the largest time constant (hopefully the process time constant) seen in the response to a change in the PID output in manual, called the “open loop time constant.” Some controllers and tuning methods prevent this need to put on the brakes as the process variable approaches setpoint by tuning the PID for a “closed loop time constant” equal to the “open loop time constant,” which leads to a controller gain that is the inverse of the process gain.

While this works well for loops where the deadtime is greater than the “open loop time constant,” for many loops important to product quality, such as crystallizer, evaporator, neutralizer, and reactor pressure and temperature control, this tuning can increase the peak error for disturbances and the rise time for setpoint changes by an order of magnitude. The result can be abnormal operation, SIS activation, and excessive batch, startup, and transition times. These loops can use a controller gain 10 times the inverse of the process gain because the process time constant to loop deadtime ratio is greater than 20:1. This sets us up for a discussion next week on the practical limit to control loop performance – PID tuning.
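
As a rough sketch of the gain argument, assume the commonly published Lambda tuning rule for a self-regulating process, Kc = τ / (Kp · (λ + θ)), where τ is the open loop time constant, θ the loop deadtime, Kp the process gain, and λ the closed loop time constant. The rule and the specific numbers below are illustrative; the post does not give the equation explicitly.

```python
def lambda_controller_gain(tau, deadtime, process_gain, closed_loop_tc):
    """Lambda tuning gain for a self-regulating process:
    Kc = tau / (Kp * (lambda + deadtime)).
    A commonly published rule, used here as an assumption to
    illustrate the post's gain argument."""
    return tau / (process_gain * (closed_loop_tc + deadtime))

tau, theta, kp = 100.0, 5.0, 1.0  # time constant to deadtime ratio of 20:1

# Closed loop time constant set equal to the open loop time constant:
# the gain collapses to roughly the inverse of the process gain.
print(lambda_controller_gain(tau, theta, kp, closed_loop_tc=tau))    # ~0.95

# Closed loop time constant pushed down toward the deadtime, which the
# 20:1 ratio makes practical: the gain is about 10 times the inverse
# of the process gain.
print(lambda_controller_gain(tau, theta, kp, closed_loop_tc=theta))  # 10.0
```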