Friday, December 20, 2013

Improved Tag and History Data Precision in INFI 90 Systems

INFI 90 is based on the concept of Exception Reports (XRs), which reduce the communication load needed to deliver good values to HMI systems, historians, OPC servers and other parts of the system.  Analog values have a "significant change specification" that suppresses reports from values that are not changing significantly.
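
As an illustration, here is a minimal sketch of that reporting decision in Python (my own pseudocode, not ABB firmware; the class and parameter names are invented):

    class XRSource:
        """Sketch of exception-report generation for one analog value."""

        def __init__(self, sig_change, tmax_s):
            self.sig_change = sig_change  # significant change, engineering units
            self.tmax_s = tmax_s          # timeout that forces a report anyway
            self.last_value = None
            self.last_time = None

        def sample(self, t, value):
            """Return the value if an XR fires at time t, otherwise None."""
            fire = (
                self.last_value is None                             # first sample
                or abs(value - self.last_value) >= self.sig_change  # big move
                or t - self.last_time >= self.tmax_s                # Tmax timeout
            )
            if fire:
                self.last_value, self.last_time = value, t
                return value
            return None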

History

This history is approximate, as told to me by gurus over the years.

Net 90 and INFI 90 started with a 1 MBaud plant loop communication ring, with the added impediment that a separate XR was sent for each enrolled user.  These two factors made it easy to overload the capacity of the system.

Infi Loop introduced both a 10 MBaud communication ring and a protocol that allowed an XR to have multiple destinations on the ring, so the load caused by an exception report no longer grew when, for example, more consoles used the tag.

Pragma

Practical guidelines were developed to avoid the loss of exception reports in the early systems, and were refined as the systems got both bigger and faster.  Bigger, of course, meant more load, but faster meant more capability.  Initially, the practical suggestion and default was that a change of 1% of span in an analog value was a good compromise.  Sensors were often not very accurate, so this seemed logical.

The Modern World

In the modern world, there are aspects you should take into account:
  1. INFI 90 communication capability increases have not usually been translated into more precise data.
  2. Sensors have often changed from 2 1/2 digits to 3 1/2 digits, ten times as precise.
  3. Techniques now exist to easily monitor exception report delay and communication loading.
  4. Decisions based on data that is not sensitive enough are made regularly, probably affecting your plant negatively at times.  This applies to both console operations and historical data.

A Case Study

In one plant, examining the DBDOC Significant Change Report, we noted a flow rate with a span of 7000 and the default 1% significant change, meaning the value had to move by 70 units before a new exception report was sent.  Because the value was being imported into another module by exception report, we could study the suitability of the significant change setting.

There were 23 exception reports in the period shown, 18 of them at Tmax of 75 seconds.

Here are some salient aspects of the raw and imported values:

  • Raw value is the pink line.
  • Imported value is the blue stepped line.
  • Working range is under 400.
  • Tmax is 75 seconds, and all positive-going exception reports were caused by the timeout.
  • The negative-going exception reports were triggered by the drop of 70 before 75 seconds had elapsed.

The green sections show where the imported exception report value was less than the actual value; the orange ones show where it was greater.  Interestingly enough (to me, anyhow), I had joked in front of clients about how bad it would be to integrate exactly this sort of variable.  You would find the customer's accountants very happy when you told them you had used the blue step function to decide how much they should pay.  They would pay fast, and not quibble.

Out of curiosity, I estimated the performance with a 0.1% significant change.  I did this by simply taking the raw value as a start and making a pseudo-XR whenever the value increased or decreased by 7.0 or more.  This gave me an estimate of how the imported values would look and what the performance would be.  There would have been about 140 exception reports in 1420 seconds, or about 6 per minute.  Only one 75-second period went by in which there would not have been a significant change.
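
As a rough re-creation of that pass (a sketch under my own assumptions about the data layout; it ignores the Tmax timeout that a real module would still apply):

    def pseudo_xrs(samples, deadband=7.0):
        """samples: time-ordered list of (t_seconds, raw_value) pairs.
        Emit a pseudo-XR whenever the value moves by deadband or more
        from the last emitted value.  7.0 is 0.1% of the 7000 span."""
        reports = []
        for t, value in samples:
            if not reports or abs(value - reports[-1][1]) >= deadband:
                reports.append((t, value))
        return reports

Run over the 1420-second window of raw samples, this is the kind of pass that yields the roughly 140 reports mentioned above.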

What does this look like from a numerical perspective?

Significant Change        1%       0.1%

Mean error              -8.1      -1.3
  % of full span       -0.12%    -0.02%
  % of working range   -2.02%    -0.34%

Mean error magnitude    29.0       3.5
  % of full span        0.41%     0.05%
  % of working range    7.24%     0.87%
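
Here is a sketch of how rows like these can be computed (the function and data layout are mine, not DBDOC output): hold each exception-reported value until the next report, difference it against the raw samples, and average:

    def xr_error_stats(samples, reports, span=7000.0, working_range=400.0):
        """samples, reports: time-ordered lists of (t, value); the first
        report is assumed to be at or before the first sample."""
        errors = []
        i = 0
        for t, raw in samples:
            # advance to the most recent report at or before time t
            while i + 1 < len(reports) and reports[i + 1][0] <= t:
                i += 1
            errors.append(reports[i][1] - raw)   # held (stepped) minus raw
        mean_err = sum(errors) / len(errors)
        mean_mag = sum(abs(e) for e in errors) / len(errors)
        return {
            "mean error": mean_err,
            "mean error % of span": 100.0 * mean_err / span,
            "mean error % of working range": 100.0 * mean_err / working_range,
            "mean magnitude": mean_mag,
            "mean magnitude % of span": 100.0 * mean_mag / span,
            "mean magnitude % of working range": 100.0 * mean_mag / working_range,
        }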

The Bottom Line

By using the data available through DBDOC Watch Window (or Composer Trending, or any other block value monitoring package), you can see that it is perfectly possible for an exception-reported value to be handled in a way that is problematic.  The data here comes from an exception report import block, but the lesson applies to every value on every graphic, and to every historical value in every INFI 90 system.

At a small cost in increased exception reports, using capacity that can be verified as safely available, process values can be made much more precise.  The full analysis done here is not necessary.  What you need to do is simply:
  • Identify imported values, tags and historical data that need more precision
  • Monitor the node communication loading
  • Where there is capacity, use it by making the significant change specification tighter, as sketched below
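
The arithmetic for that last step is simple; as a sketch (this helper is hypothetical, not a DBDOC or Composer function):

    def sig_change_percent(desired_resolution, span):
        """Convert a desired engineering-unit resolution into a
        significant change percentage of span."""
        return 100.0 * desired_resolution / span

    # For the case study flow: resolving 7 units on a 7000 span needs 0.1%,
    # versus the default 1% (a 70 unit deadband on a working range under 400).
    print(sig_change_percent(7.0, 7000.0))   # -> 0.1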

The improvements in console operation, history and perhaps control and shutdowns will be significant.  You probably have the capacity to do this right now.
