I hesitated at first to include sample time as one of the exceptional opportunities in process control because in most loops it is not an issue. Then I realized I should give my perspective on the effect of sample time for the following reasons:
(1) Since we live in a digital world, sampled data is the norm. Just from the volume of applications, the opportunity is large
(2) There are no clear guidelines for various types of process control applications
(3) In some applications conventional sample times can cause severe safety and performance issues
(4) In most cases the tuning of the controller dictates that sample times could be significantly slower. Increasing DCS module execution times would reduce controller loading, and increasing wireless communication time intervals would prolong battery life
(5) If we want more at-line analyzers to provide measurements of stream compositions that tell us what is really going on in the process and offer the opportunity for a more advanced level of control, we need to understand and address sample processing and analyzer cycle times
(6) If we want to move to more wireless measurements that give us the flexibility and intelligence for process control improvement, we need to understand and address wireless communication intervals
I am considering sample time as the time between updates in sampled data in the broadest sense. The following discussion should be useful for determining whether DCS scan or module execution times, wireless communication time intervals, model predictive control execution time, and at-line analyzer cycle time will affect control system performance.
If you are pressed for time you can skip the discussion below and just check out ProcessControlSampleTimes.pdf
There is considerable confusion as to when sample times affect the ability of a control system to compensate for unmeasured disturbances. The following is my quick attempt to provide some concepts to sort out fact from fiction and provide some guidance.
The performance of a control loop depends upon the tuning. Specifically, the peak and integrated errors are inversely proportional to the controller gain. The peak error is not affected much by the integral time setting. However, the integrated error is proportional to the integral time. Thus, a loop with good dynamics can be made to perform as poorly as a process with bad dynamics by sluggish tuning. The effect of slow sample times is hidden by large integral times or small controller gains. Thus, it is critical for any comparison that tuning criteria be specified. In fact, there is an implied deadtime as a result of the tuning of the loop, as derived and discussed in Advanced Application Note 5. The tuning of the controller puts a practical limit on how fast the sample time must be for the effect to be negligible.
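The proportionalities above can be sketched in a few lines of code. This is only an illustration of the trend, not a published formula: the helper name and the unity proportionality constant are my own assumptions.

```python
def integrated_error_estimate(load_step, kc, ti):
    """Rough trend of the integrated error (IE) for a load step, per the
    proportionalities in the text: IE is inversely proportional to the
    controller gain (kc) and directly proportional to the integral
    (reset) time (ti). The proportionality constant is assumed to be 1."""
    return load_step * ti / kc
```

Doubling the integral time doubles the estimated integrated error, and halving the controller gain does the same, which is why sluggish tuning can mask the effect of a slow sample time.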
If a controller is tuned for maximum performance, the peak error is proportional to the ratio of the loop deadtime to the process time constant, and the integrated error is proportional to the deadtime squared. These statements are strictly true only when the process time constant is large compared to the loop deadtime. The loop deadtime is the sum of the final element deadtime (e.g. valve pre-stroke time delay, deadband, and stiction), process deadtime (e.g. mixing, thermal, and transportation delays), automation deadtime (e.g. sensor lag, transmitter damping, and sample times), and small process time constants. All of the time constants smaller than the largest time constant become effectively deadtime in the first order plus deadtime approximation used in industry. Process and automation system dynamics place an ultimate limit on loop performance. There is a corresponding ultimate limit on the sample time.
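The bookkeeping behind the first order plus deadtime approximation described above can be sketched as follows (the function name is hypothetical, and the simple lumping of all smaller time constants into deadtime is exactly as stated in the text):

```python
def fopdt_approximation(pure_delays, time_constants):
    """First order plus deadtime (FOPDT) approximation per the text:
    keep the largest time constant as the process time constant and
    lump all smaller time constants, along with the pure delays
    (final element, process, and automation), into the loop deadtime."""
    taus = sorted(time_constants, reverse=True)
    tau = taus[0]                          # largest time constant survives
    deadtime = sum(pure_delays) + sum(taus[1:])  # everything else is deadtime
    return deadtime, tau
```

For example, a loop with 1 sec of valve delay, 0.5 sec of automation delay, and time constants of 50, 2, and 1 sec ends up with an effective deadtime of 4.5 sec and a process time constant of 50 sec.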
The relationships between process dynamics (e.g. total loop deadtime), controller tuning, and loop performance are detailed in the Theory section in Chapter 2 of Advanced Control Unleashed and in Appendix C of New Directions in Bioprocess Modeling and Control. All of my books and many of my articles take advantage of the fundamental understanding gained from these relationships.
The effect of sample times can be assessed in terms of practical and ultimate limits on performance. For critical loops where peak errors can cause destruction or environmental releases, such as compressor surge control, furnace pressure control, exothermic reactor temperature control, and RCRA pH control, the tuning is necessarily aggressive. As a result, the practical limit is much closer to the ultimate limit. For a discussion of cases where exceptionally fast sample times are needed, check out the April 2, 2007 entry “Analog Control Holdouts.”
For excellent final elements, clean sensors, and transmitter damping settings of 0.2 sec, we can suggest practical and ultimate sample times for different types of processes with typical dynamics. The ultimate limit (fastest conceivable sample time requirement) is set to be less than 1/10th of the sum of the minimum loop deadtime and minimum process time constant, with some consideration as to maximum practical controller gains to reduce valve cycling and noise amplification. For any loop with a large control valve, the minimum loop deadtime is about 1 second for an unmeasured disturbance (unless volume boosters have been added to the output of the positioner), so the ultimate limit on sample time is about 0.1 second. The practical limit reflects current tuning practices (much slower tuning to ensure a smooth gradual response despite unknowns and nonlinearities). For integrating processes, the process time constant shown is the inverse of the integrating process gain (denoted by a single exclamation point). The double exclamation point denotes a runaway (positive feedback) process time constant. Consultants say it is impossible to generalize, but I think some guidance is helpful to the user with the realization that there are always exceptions, and that the actual process dynamics and tuning should be identified by automated online tuners and adaptive controllers (e.g. DeltaV Insight). I didn’t consider ultimate sample times slower than 60 sec. Note that slower sample times will affect the deadtime identified. A Rough Guide to DCS and Measurement (e.g. Wireless) Sample Times is offered in ProcessControlSampleTimes.pdf
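The ultimate-limit rule of thumb above can be written down directly (the function name and the exact 1/10th fraction used as a default are my own choices; the text says "less than 1/10th"):

```python
def ultimate_sample_time(min_loop_deadtime, min_time_constant, fraction=0.1):
    """Ultimate (fastest conceivable) sample time requirement per the
    rule of thumb in the text: less than 1/10th of the sum of the
    minimum loop deadtime and the minimum process time constant."""
    return fraction * (min_loop_deadtime + min_time_constant)
```

The large-control-valve case in the text falls out directly: a 1 sec minimum loop deadtime (with a negligible minimum time constant) gives an ultimate sample time of about 0.1 sec.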
For many digital devices the update is available near the beginning of the sample time (latency is negligible), which means the average deadtime from the sample time is about half the sample period. For at-line analyzers (field analyzers with automated sample systems), the result is not available until the end of the sample processing and analyzer cycle time, which translates to an average effective deadtime that is about 1.5 times the time interval between updates in the analyzer output signal. These deadtimes determine the minimum peak error for an unmeasured step disturbance at the input to the process.
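These two rules of thumb can be captured in one small helper (the name and the boolean switch are my own; the 0.5 and 1.5 factors come straight from the text):

```python
def effective_deadtime(update_interval, at_line_analyzer=False):
    """Average effective deadtime contributed by sampled data, per the
    text: about half the sample period for digital devices whose update
    is available near the start of the interval (negligible latency),
    and about 1.5 times the update interval for at-line analyzers,
    whose result only appears at the end of the sample processing and
    analyzer cycle time."""
    factor = 1.5 if at_line_analyzer else 0.5
    return factor * update_interval
```

So a 10 sec wireless update interval adds roughly 5 sec of effective deadtime, while a 10 min analyzer cycle adds roughly 15 min.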
The detrimental effect of sample time can be worse than that of an equivalent continuous deadtime. For continuous sources of deadtime, such as process transportation and mixing time delays and small process time constants, there is a continuous train of updates; for sampled data there are no intervening values. Consequently, the effects can be worse. For example, there is aliasing of oscillations, where the indicated amplitude is smaller and the period is larger than actual. There can also be jitter due to variations in latency and lack of synchronization of digital data that introduce variable time delays and noise for rapidly changing signals.
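The aliasing effect mentioned above can be illustrated numerically. The folding formula below is the standard frequency-aliasing relation, not something from this post, and the function name is my own:

```python
def aliased_period(true_period, sample_time):
    """Apparent period of an oscillation sampled too slowly (aliasing).
    The sampled frequency folds into the band below the Nyquist
    frequency: f_alias = |f - round(f / fs) * fs|, so a fast oscillation
    shows up as a much slower one, as described in the text."""
    fs = 1.0 / sample_time   # sampling frequency
    f = 1.0 / true_period    # true oscillation frequency
    f_alias = abs(f - round(f / fs) * fs)
    return float("inf") if f_alias == 0 else 1.0 / f_alias
```

For instance, a 10 sec oscillation sampled every 9 sec shows up on the trend as an oscillation with a period of about 90 sec, far slower than the real cycling in the process.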
The PIDPLUS modification of the traditional PID, developed for wireless applications, helps the PID deal with the sample times of digital devices, wireless communication, and at-line analyzers. The improvement is most dramatic for self-regulating processes but is also significant for integrating processes, as seen in the tests documented in ControlStudiesPIDPLUS1.pdf. The PIDPLUS algorithm also breaks the limit cycle caused by the resolution limit from the deadband setting used for exception reporting in wireless devices, because integral action is only done when there is a measurement update.
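The key idea can be sketched as follows. To be clear, this is my own simplified illustration of "integral action only on a measurement update," not Emerson's actual PIDPLUS algorithm; the class name, the positive-feedback reset filter form, and the elapsed-time exponential are all assumptions for illustration:

```python
import math

class PIDPlusSketch:
    """Toy sketch of the PIDPLUS idea from the text: proportional action
    runs every execution, but the reset (integral) contribution is only
    updated when a new measurement arrives, using the elapsed time since
    the last update in an exponential reset filter."""

    def __init__(self, kc, ti):
        self.kc = kc              # controller gain
        self.ti = ti              # integral (reset) time, sec
        self.reset = 0.0          # accumulated reset contribution
        self.last_update_time = None

    def update(self, pv, setpoint, now, new_measurement):
        error = setpoint - pv
        if new_measurement:
            if self.last_update_time is not None:
                dt = now - self.last_update_time
                # integral action only on a measurement update, weighted
                # by the elapsed time since the last update
                self.reset += self.kc * error * (1.0 - math.exp(-dt / self.ti))
            self.last_update_time = now
        return self.kc * error + self.reset
```

Between updates the output is held by proportional action alone, which is why the limit cycle from the wireless reporting deadband is broken: with no new measurement there is no reset ramping.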