Communication Interval, Control Execution Time, Analyzer Cycle Time, and Scan Time

We could talk about how important communication is for our society and, even more importantly, our marriages, but let's stick to something we are more interested in as automation engineers, particularly since we essentially have no control over politicians and spouses. So let's talk about communication intervals, control execution intervals, analyzer cycle times, and input scan times.

We tend to think that faster is better, but this is not always the case. For example, a bioprocess control engineer recently suggested that model predictive control of growth rate in a fermentor would not work because the changes in growth rate were too small. If you consider that it is just a matter of time frame, you see a resolution (pun intended). If an analysis were made every hour, the true change in biomass concentration would be small compared to the repeatability of the analysis, so the signal to noise ratio for the rate of change of biomass concentration (biomass growth rate) would be poor. However, process control is still possible if the time interval between analysis data points is increased and the result is fed to the rate of change calculation described in the article “Full Throttle Batch and Startup Response” in the May 2006 issue of Control. Note that even though this calculation uses a dead time block and a velocity (rate) limit block, the proper setup of these blocks does not introduce additional dead time. Further details on the configuration, and on the proper filtering and rate limiting of the process variable before it goes into the dead time block for the rate of change calculation, are offered in the following screen print of a module.

Rate of Change Module
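
To make the idea concrete, here is a minimal sketch of the kind of calculation such a module performs: the filtered process variable is compared with its own value delayed by a dead time block, and the difference divided by that dead time gives the rate of change. The function name, the dictionary-based state, and the filter factor are my own illustrative choices, not the actual module configuration, and the rate limiting of the process variable discussed above is omitted for brevity.

```python
from collections import deque

def make_rate_of_change(dead_time, sample_time, filter_alpha=0.2):
    """Return an update function that computes the rate of change of a
    process variable (PV) as (filtered PV now - filtered PV one dead
    time ago) / dead time.  Times are in consistent units (e.g. hours)."""
    n = max(1, int(round(dead_time / sample_time)))
    history = deque(maxlen=n + 1)   # plays the role of the dead time block
    state = {"filtered": None}

    def update(pv):
        # First-order filter to knock down analyzer noise before the delay
        if state["filtered"] is None:
            state["filtered"] = pv
        else:
            state["filtered"] += filter_alpha * (pv - state["filtered"])
        history.append(state["filtered"])
        if len(history) <= n:
            return 0.0              # not enough history yet for a rate
        delayed = history[0]        # filtered PV from one dead time ago
        return (state["filtered"] - delayed) / dead_time

    return update
```

With an hourly biomass analysis and a four-hour dead time, a steady growth of 0.5 concentration units per hour shows up as a rate of 0.5 once the dead time block has filled, even though each hourly change is small relative to analysis repeatability.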

The use of a rate of change as the controlled variable is described for PID control of an exothermic reactor in the book A Funny Thing Happened on the Way to the Control Room and for model predictive control of a bioreactor in the book New Directions in Bioprocess Modeling and Control.

Whether we are talking about analyzers or any sort of digital communication, control, or processing, the time interval creates a dead time for unmeasured disturbances. The actual dead time for detecting and reacting to an upset depends upon the relative timing of the read (input), the write (output), and the upset. If the output is written right after the input, the dead time varies from nearly zero to one time interval for an upset that arrives just before and just after the input, respectively. On average, we can say the upset arrives in the middle of the interval, so the average dead time is 1/2 of the time interval. For unsynchronized digital devices, the worst case dead time could be the summation of the time intervals. If the output is written at the end of the time interval, the dead time varies from one to two time intervals for an upset that arrives just before and just after the input, respectively. This is the case for chromatographs and other analyzers, where the sample is processed and the analysis is ready at the end of the cycle time. Here the average is 1.5 times the time interval (cycle time). The following slide illustrates the concept.

Dead Time from Discrete Devices and Analyzers
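
The two cases above can be checked with a few lines of arithmetic. This is a sketch with hypothetical function names: upsets arriving uniformly across one interval give an average dead time of half an interval when the output is written right after the input, and 1.5 intervals when the result is only available at the end of the cycle, as with an analyzer.

```python
def deadtime_immediate(u, T):
    """Inputs are read at times 0, T, 2T, ... and the output is written
    right after each read.  An upset at time u (0 <= u < T) is first
    acted on at the read at time T, so the dead time is T - u."""
    return T - u

def deadtime_end_of_cycle(u, T):
    """Analyzer case: the sample taken at the read is processed over one
    full cycle, so the result (and the reaction) is one interval later."""
    return (T - u) + T

def average_deadtime(deadtime_fn, T, n=1000):
    """Average dead time for upsets spread uniformly across one cycle."""
    offsets = [(i + 0.5) * T / n for i in range(n)]
    return sum(deadtime_fn(u, T) for u in offsets) / n
```

For a 60-second interval, the averages come out to 30 seconds and 90 seconds, matching the 0.5 and 1.5 multipliers stated above; an upset arriving just before a read sees almost no dead time in the first case.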

Even when dead time is introduced, it has minimal effect on performance for detuned controllers, since the integrated absolute error for the upset depends on the controller tuning settings. In my Control Talk column in the November 2006 issue of Control magazine, we discussed how an increase in digital time intervals did not have an effect on a controller tuned with a Lambda factor of one until the total dead time exceeded half of the process time constant. Thus, tests on the effect of intervals and cycle times should use different relative timings of the unmeasured disturbance and various tuning settings.
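
As a rough back-of-the-envelope check of that rule of thumb, the sketch below totals up the dead time contributions using the average multipliers from the slide above and flags when the total exceeds half of the process time constant. The function names and the simple additive model are my own illustrative assumptions, not a formula from the column.

```python
def total_loop_deadtime(process_deadtime, scan_interval, analyzer_cycle=0.0):
    """Rough total dead time seen by the controller: the process dead
    time plus the average delay from the scan interval (0.5 * interval)
    and, if present, an analyzer cycle (1.5 * cycle time)."""
    return process_deadtime + 0.5 * scan_interval + 1.5 * analyzer_cycle

def interval_matters(process_deadtime, process_time_constant,
                     scan_interval, analyzer_cycle=0.0):
    """Rule of thumb: with a Lambda factor of one, the extra dead time
    has little effect until the total dead time exceeds half of the
    process time constant."""
    total = total_loop_deadtime(process_deadtime, scan_interval,
                                analyzer_cycle)
    return total > 0.5 * process_time_constant
```

For example, a loop with 10 seconds of process dead time and a 100-second process time constant is barely affected by a 4-second scan interval, but adding a 30-second analyzer cycle pushes the total dead time past the half-time-constant threshold.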

(The above is an excerpt from my Control Talk column in the upcoming December 2006 issue of Control magazine. Please see the column for a more complete discussion and the latest “Top Ten List.”)