
IEC 62541-13:2015


DOCUMENT INFORMATION

Title: IEC 62541-13:2015
Institution: University of Geneva
Field: Electrical Engineering
Document type: Standards Document
Year of publication: 2015
City: Geneva
Pages: 184
Size: 3.47 MB


Structure

  • 3.1 Terms and definitions
  • 3.2 Abbreviations
  • 4.1 General
  • 4.2 Aggregate Objects
    • 4.2.1 General
    • 4.2.2 AggregateFunction Object
  • 4.3 MonitoredItem AggregateFilter
    • 4.3.1 MonitoredItem AggregateFilter Defaults
    • 4.3.2 MonitoredItem Aggregates and Bounding Values
  • 4.4 Exposing Supported Functions and Capabilities
  • 5.1 General
  • 5.2 Aggregate data handling
    • 5.2.1 Overview
    • 5.2.2 ReadProcessedDetails structure overview
    • 5.2.3 AggregateFilter structure overview
  • 5.3 Aggregates StatusCodes
    • 5.3.1 Overview
    • 5.3.2 Operation level result codes
    • 5.3.3 Aggregate Information Bits
  • 5.4 Aggregate details
    • 5.4.1 General
    • 5.4.2 Common characteristics
    • 5.4.3 Specific Aggregated data handling
  • A.1 Historical Aggregate specific characteristics
  • A.2 Interpolative
    • A.2.1 Description
    • A.2.2 Interpolative data
  • A.3 Average
    • A.3.1 Description
    • A.3.2 Average data
  • A.4 TimeAverage
    • A.4.1 Description
    • A.4.2 TimeAverage data
  • A.5 TimeAverage2
    • A.5.1 Description
    • A.5.2 TimeAverage2 data
  • A.6 Total
    • A.6.1 Description
    • A.6.2 Total data
  • A.7 Total2
    • A.7.1 Description
    • A.7.2 Total2 data
  • A.8 Minimum
    • A.8.1 Description
    • A.8.2 Minimum data
  • A.9 Maximum
    • A.9.1 Description
    • A.9.2 Maximum data
  • A.20 DurationInStateZero
    • A.20.1 Description
    • A.20.2 DurationInStateZero data
  • A.22 NumberOfTransitions
    • A.22.1 Description
    • A.22.2 NumberOfTransitions data
  • A.23 Start
    • A.23.1 Description
    • A.23.2 Start data
  • A.24 End
    • A.24.1 Description
    • A.24.2 End data
  • A.25 StartBound
    • A.25.1 Description
    • A.25.2 StartBound data
  • A.26 EndBound
    • A.26.1 Description
    • A.26.2 EndBound data
  • A.27 Delta
    • A.27.1 Description
    • A.27.2 Delta data
  • A.28 DeltaBounds
    • A.28.1 Description
    • A.28.2 DeltaBounds data
  • A.29 DurationGood
    • A.29.1 Description
    • A.29.2 DurationGood data
  • A.30 DurationBad
    • A.30.1 Description
    • A.30.2 DurationBad data
  • A.31 PercentGood
  • A.32 PercentBad
    • A.32.1 Description
    • A.32.2 PercentBad data
  • A.33 WorstQuality
    • A.33.1 Description
    • A.33.2 WorstQuality data
  • A.34 WorstQuality2
    • A.34.1 Description
    • A.34.2 WorstQuality2 data
  • A.35 StandardDeviationSample
    • A.35.1 Description
    • A.35.2 StandardDeviationSample data
  • A.36 VarianceSample
    • A.36.1 Description
    • A.36.2 VarianceSample data
  • A.37 StandardDeviationPopulation
    • A.37.1 Description
    • A.37.2 StandardDeviationPopulation data
  • A.38 VariancePopulation
    • A.38.1 Description
    • A.38.2 VariancePopulation data

Contents


Terms and definitions

For the purposes of this document, the terms and definitions given in IEC TR 62541-1, IEC 62541-3, IEC 62541-4, and IEC 62541-11, as well as the following, apply.

3.1.1 ProcessingInterval: timespan for which derived values are produced based on a specified Aggregate

The total time domain for ReadProcessed is divided by the ProcessingInterval. For instance, conducting a 10-minute average from 12:00 to 12:30 results in three intervals of ProcessingInterval length, starting at 12:00, 12:10, and 12:20. The criteria for determining the interval bounds are outlined in 5.4.2.

3.1.2 interpolated data: data that is calculated from data samples

Data samples can consist of either historical data or buffered real-time data. An interpolated value is derived from the data points surrounding the specified timestamp.

All aggregate calculations include the start time but exclude the end time. In some cases it is necessary to return an interpolated end bound for an interval with a timestamp within it. Servers should use the time immediately before the end time, with the Server's time resolution determining the exact value. For instance, if the end time is 12:01:00 and the time resolution is 1 second, the effective end time would be 12:00:59.

If time is flowing backwards, Servers are expected to use the time immediately after the endTime, where the time resolution of the Server determines the exact value.

3.1.4 extrapolated data: data constructed from a discrete data set but outside of the discrete data set

Extrapolation is akin to interpolation in that it constructs new data points, but the points lie beyond the known data set, so it carries a higher degree of uncertainty. This method is particularly useful when predicting data for a time period that extends beyond the information available in the underlying system.

Note 1 to entry: Compare to curve fitting using linear polynomials. See the example in Table 1.

3.1.6 SteppedInterpolation: holding the last data point constant or interpolating the value based on a horizontal line fit

Note 1 to entry: Consider the following Table 1 of raw and Interpolated/Extrapolated values:

Timestamp Raw Value Sloped Interpolation Stepped Interpolation

3.1.7 bounding values: values at the startTime and endTime needed for Aggregates to compute the result

If raw data is unavailable at the specified start and end times, an estimated value must be provided. There are two methods for determining bounding values for an interval: Interpolated Bounding Values, which use the first non-Bad data points before and after the timestamp, and Simple Bounding Values, which rely on the data points immediately surrounding the boundary timestamps, regardless of their quality. Further details on these approaches can be found in 3.1.8 and 3.1.9.

In all cases the TreatUncertainAsBad flag (see 4.2.1.2) is used to determine whether Uncertain values are treated as Bad or non-Bad.

If a Raw value was not found and a non-Bad bounding value exists, the Aggregate Bits (see 5.3.3) are set to Interpolated.

When determining bounding values, any Raw data marked with a Bad status has its value treated as null, meaning it is excluded from calculations; where a raw value would otherwise be returned, a null is provided instead. The status is determined by the rules defined for the bound or Aggregate.

The Interpolated Bounding Values approach, used in Classic OPC Historical Data Access (HDA), is crucial for applications like advanced process control that require reliable data at all times. In contrast, the newer Simple Bounding Values approach is essential for applications generating regulatory reports, since it prohibits substituting estimated values for bad data.

3.1.8 interpolated bounding values: bounding values determined by a calculation using the nearest Good value

Note 1 to entry: Interpolated Bounding Values using SlopedInterpolation are calculated as follows:

• if a non-Bad Raw value exists at the timestamp then it is the bounding value;

• find the first non-Bad Raw value before the timestamp;

• find the first non-Bad Raw value after the timestamp;

• draw a line between the before value and the after value;

• use the point where the line crosses the timestamp as an estimate of the bounding value.

The calculation can be expressed with the following formula:

Vbound = (Tbound − Tbefore) × (Vafter − Vbefore) / (Tafter − Tbefore) + Vbefore

where Vx is the value at x and Tx is the timestamp associated with Vx.

If there are no non-Bad values prior to the timestamp, the StatusCode is Bad_NoData. The StatusCode is Uncertain_DataSubNormal if any Bad values are present between the before and after values. Additionally, if either the before or after value is Uncertain, the StatusCode is Uncertain_DataSubNormal. In cases where the after value is absent, the before value is extrapolated using either SlopedExtrapolation or SteppedExtrapolation.
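The rules above can be made concrete with a short sketch. The following Python function is a minimal, non-normative illustration: it assumes samples are (timestamp, value, status) tuples sorted by time, with a simplified three-state quality model ("Good", "Uncertain", "Bad") standing in for full OPC UA StatusCodes, and plain numbers for timestamps.

```python
GOOD, UNCERTAIN, BAD = "Good", "Uncertain", "Bad"

def sloped_bound(samples, t_bound):
    """Interpolated Bounding Value via SlopedInterpolation (sketch of 3.1.8).

    samples: list of (timestamp, value, status) sorted by timestamp.
    Returns (value, status_code).
    """
    # A non-Bad raw value exactly at the timestamp is the bounding value itself.
    for t, v, s in samples:
        if t == t_bound and s != BAD:
            return v, "Good"

    before = [(t, v, s) for t, v, s in samples if t < t_bound and s != BAD]
    after = [(t, v, s) for t, v, s in samples if t > t_bound and s != BAD]
    if not before:
        return None, "Bad_NoData"    # no non-Bad value prior to the timestamp

    t_b, v_b, s_b = before[-1]
    if not after:
        # No after value: the bound must be extrapolated from the before
        # value(s) instead (SlopedExtrapolation or SteppedExtrapolation).
        return v_b, "Uncertain_DataSubNormal"

    t_a, v_a, s_a = after[0]
    # Vbound = (Tbound - Tbefore) * (Vafter - Vbefore) / (Tafter - Tbefore) + Vbefore
    v_bound = (t_bound - t_b) * (v_a - v_b) / (t_a - t_b) + v_b

    # Bad values between the before/after pair, or an Uncertain endpoint,
    # degrade the result to Uncertain_DataSubNormal.
    bad_between = any(s == BAD for t, _, s in samples if t_b < t < t_a)
    if bad_between or UNCERTAIN in (s_b, s_a):
        return v_bound, "Uncertain_DataSubNormal"
    return v_bound, "Good"
```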

The time frame for searching Good values before and after a timestamp is Server-dependent. If a Good value is not located within a reasonable time range, the Server concludes that it does not exist. At a minimum, the Server should search a time range at least equal to the size of the ProcessingInterval.

Interpolated Bounding Values using SlopedExtrapolation are calculated as follows:

• find the first non-Bad Raw value before timestamp;

• find the second non-Bad Raw value before timestamp;

• draw a line between these two values;

• extend the line to where it crosses the timestamp;

• use the point where the line crosses the timestamp as an estimate of the bounding value

The formula is the same as the one used for SlopedInterpolation

The StatusCode is always Uncertain_DataSubNormal. If only one non-Bad raw value can be found before the timestamp, then SteppedExtrapolation is used to estimate the bounding value.

Interpolated Bounding Values using SteppedInterpolation are calculated as follows:

• if a non-Bad Raw value exists at the timestamp then it is the bounding value;

• find the first non-Bad Raw value before timestamp;

• use the value as an estimate of the bounding value

The StatusCode is Uncertain_DataSubNormal if any Bad values exist between the before value and the timestamp

When evaluating data before a specified timestamp, if no reliable raw data exists, the StatusCode defaults to Bad_NoData. Conversely, if the preceding value is Uncertain, the StatusCode is set to Uncertain_DataSubNormal. Notably, when using SteppedInterpolation, the value following the timestamp is inconsequential, unless the timestamp exceeds the end of the data, in which case the bounding value is treated as extrapolated and the StatusCode is again Uncertain_DataSubNormal.

SteppedExtrapolation is a term that describes SteppedInterpolation when a timestamp is after the last value in the history collection
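Under the same simplified data model as the sloped sketch above, the stepped variant reduces to holding the last non-Bad value constant, with SteppedExtrapolation folded in as the past-end-of-history case; again a non-normative sketch:

```python
def stepped_bound(samples, t_bound):
    """Interpolated Bounding Value via SteppedInterpolation (sketch)."""
    for t, v, s in samples:
        if t == t_bound and s != "Bad":
            return v, "Good"             # raw value at the timestamp is the bound
    before = [(t, v, s) for t, v, s in samples if t < t_bound and s != "Bad"]
    if not before:
        return None, "Bad_NoData"        # nothing non-Bad before the timestamp
    t_b, v_b, s_b = before[-1]
    # Past the last value in the history collection the bound counts as
    # extrapolated (SteppedExtrapolation) and is therefore Uncertain.
    extrapolated = t_bound > samples[-1][0]
    bad_between = any(s == "Bad" for t, _, s in samples if t_b < t < t_bound)
    if bad_between or s_b == "Uncertain" or extrapolated:
        return v_b, "Uncertain_DataSubNormal"
    return v_b, "Good"
```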

3.1.9 simple bounding values: bounding values determined by a calculation using the nearest value

Note 1 to entry: Simple Bounding Values using SlopedInterpolation are calculated as follows:

• if any Raw value exists at the timestamp then it is the bounding value;

• find the first Raw value before timestamp;

• find the first Raw value after timestamp;

• draw a line between the before value and the after value, and use the point where the line crosses the timestamp as an estimate of the bounding value.

The formula is the same as the one used for SlopedInterpolation in 3.1.5.

If a raw value at the specified timestamp is classified as Bad, the StatusCode is Bad_NoData. Similarly, if the preceding value is Bad, the StatusCode remains Bad_NoData. In cases where the prior value is Uncertain, the StatusCode is Uncertain_DataSubNormal. Additionally, if the value following the timestamp is either Bad or Uncertain, the StatusCode is also Uncertain_DataSubNormal.

Simple Bounding Values using SteppedInterpolation are calculated as follows:

• if any Raw value exists at the timestamp then it is the bounding value;

• find the first Raw value before timestamp;

• if the value before timestamp is non-Bad then it is the bounding value

If the raw value at the given timestamp is classified as Bad, the StatusCode is Bad_NoData. Similarly, if the preceding value is Bad, the StatusCode remains Bad_NoData. In cases where the prior value is Uncertain, the StatusCode is Uncertain_DataSubNormal.

If either bounding time of an interval is beyond the last data point, extrapolation may be used if the Server considers it appropriate for the data.

To a historian, the last raw value does not always signify the end of the data. Depending on its understanding of the data collection process, including update frequency and latency, a historian may extend the last value to a time it knows is covered. When determining Simple Bounding Values, it then acts as if there were an additional raw value at that timestamp.

If an interval begins before the earliest historical data point and ends after it, the interval will be considered to extend from the first historical data point to its latest time, with the StatusCode reflecting a Partial bit set.

The time frame for searching values before and after a timestamp is Server-dependent. If a value is not located within a reasonable time range, the Server concludes that it does not exist. At a minimum, the Server should search a time range at least equal to the size of the ProcessingInterval.

Abbreviations

HA Historical Access (access to historical data or events)

General

This standard outlines the representation of aggregate historical and buffered real-time data within the OPC Unified Architecture, detailing the Aggregates used for processed data retrieval and historical data access. It encompasses both standard Reference types and Object types.

Aggregate Objects

General

OPC UA Servers offer a variety of functions and capabilities, using standard Objects to present these features consistently. Additionally, vendors can extend several of the defined standard concepts to enhance their offerings.

The AggregateConfigurationType outlines the essential features of a Node that specifies the Aggregate configuration for any Variable or Property. The AggregateConfiguration Object serves as the primary access point for understanding how the Server manages Aggregate-specific functions, including the handling of Uncertain data, as detailed in Table 2.

Table 2 – AggregateConfigurationType definition

Attribute | Value
BrowseName | AggregateConfigurationType
IsAbstract | False

Subtype of the BaseObjectType defined in IEC 62541-5.

References | NodeClass | BrowseName | DataType | TypeDefinition | ModellingRule
HasProperty | Variable | TreatUncertainAsBad | Boolean | PropertyType | Mandatory
HasProperty | Variable | PercentDataBad | Byte | PropertyType | Mandatory
HasProperty | Variable | PercentDataGood | Byte | PropertyType | Mandatory
HasProperty | Variable | UseSlopedExtrapolation | Boolean | PropertyType | Mandatory

The TreatUncertainAsBad Variable determines how the Server handles data with an Uncertain StatusCode during Aggregate calculations. When set to True, the Server treats Uncertain severity as Bad; when set to False, it treats it as Good, unless specified otherwise in the Aggregate definition. The default setting is True. Note that the StatusCode of the result remains Uncertain when such a value is used in the calculation.

The PercentDataBad Variable specifies the minimum percentage of Bad data required in a given interval for the StatusCode of the processed data request to be classified as Bad. Uncertain data is handled as defined above. For guidance on using this Variable for StatusCode assignments, refer to 5.4.3. Details of which Aggregates use the PercentDataBad Variable can be found in the definition of each Aggregate. The default value is 100.

The PercentDataGood Variable specifies the minimum percentage of Good data required in a given interval for the StatusCode of the processed data request to be classified as Good. For more information on using this Variable for StatusCode assignments, refer to 5.4.3. Details of which Aggregates use the PercentDataGood Variable can be found in the definition of each Aggregate. The default value is 100.

The relationship between PercentDataGood and PercentDataBad is defined as PercentDataGood ≥ (100 – PercentDataBad). Where the two values are equal, the PercentDataGood calculation is used. If the supplied values of PercentDataGood and PercentDataBad make a valid calculation impossible, such as Bad = 80 and Good = 0, the result is assigned a StatusCode of Bad_AggregateInvalidInputs.
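This constraint is easy to check mechanically; the helper below is a minimal Python sketch (the function name is ours, not from the standard):

```python
def aggregate_configuration_is_valid(percent_data_good: int, percent_data_bad: int) -> bool:
    """Check the constraint PercentDataGood >= (100 - PercentDataBad)."""
    return percent_data_good >= 100 - percent_data_bad

# The defaults (100, 100) are valid; Bad = 80 with Good = 0 is not, and a
# request using it would be answered with Bad_AggregateInvalidInputs.
assert aggregate_configuration_is_valid(100, 100)
assert not aggregate_configuration_is_valid(0, 80)
```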

The UseSlopedExtrapolation Variable determines how the Server extrapolates data in the absence of an end boundary value, i.e. when projecting future values from the last known data point. When set to False, the Server uses SteppedExtrapolation, holding the last known value constant. When set to True, the Server projects the value using SlopedExtrapolation. The default is False. This Variable is disregarded for SimpleBounds.

AggregateFunction Object

The Object serves as the entry point for browsing information about the Aggregates supported by a Server, with its content defined by its type definition. All instances of the FolderType use the standard BrowseName 'AggregateFunctions'. The HasComponent Reference connects a ServerCapabilities Object and/or any HistoricalServerCapabilities Object to an AggregateFunction Object, as formally outlined in Table 3.

The table columns are NodeClass, BrowseName, DataType, TypeDefinition, and ModellingRule; the Object is of Type FolderType, defined in IEC 62541-5.

Each ServerCapabilities and HistoricalServerCapabilities Object shall reference an AggregateFunction Object. In addition, each HistoricalConfiguration Object belonging to a HistoricalDataNode may reference an AggregateFunction Object using the HasComponent Reference.

This ObjectType defines an Aggregate supported by a UA Server. This Object is formally defined in Table 4.

Subtype of the BaseObjectType defined in IEC 62541-5.

For the AggregateFunctionType, the Description Attribute (inherited from the Base NodeClass) is mandatory. The Description Attribute provides a localized description of the Aggregate.

Table 5 specifies the BrowseName and Description Attributes for the standard Aggregate Objects. The description is the localized "en" text; for other locales it shall be translated.

Interpolative – At the beginning of each interval, retrieve the calculated value from the data points on either side of the requested timestamp.

Average – Retrieve the average value of the data over the interval.

TimeAverage – Retrieve the time-weighted average data over the interval using Interpolated Bounding Values.

TimeAverage2 – Retrieve the time-weighted average data over the interval using Simple Bounding Values.

Total – Retrieve the total (time integral) of the data over the interval using Interpolated Bounding Values.

Total2 – Retrieve the total (time integral) of the data over the interval using Simple Bounding Values.

Minimum – Retrieve the minimum raw value in the interval with the timestamp of the start of the interval.

Maximum – Retrieve the maximum raw value in the interval with the timestamp of the start of the interval.

MinimumActualTime – Retrieve the minimum value in the interval and the timestamp of the minimum value.

MaximumActualTime – Retrieve the maximum value in the interval and the timestamp of the maximum value.

Range – Retrieve the difference between the minimum and maximum value over the interval.

Minimum2 – Retrieve the minimum value in the interval, including the Simple Bounding Values.

Maximum2 – Retrieve the maximum value in the interval, including the Simple Bounding Values.

MinimumActualTime2 – Retrieve the minimum value with the actual timestamp, including the Simple Bounding Values.

MaximumActualTime2 – Retrieve the maximum value with the actual timestamp, including the Simple Bounding Values.

Range2 – Retrieve the difference between the Minimum2 and Maximum2 value over the interval.

Count – Retrieve the number of raw values within the interval.

DurationInStateZero – Retrieve the duration a Boolean or numeric value was in a zero state, using Simple Bounding Values.

DurationInStateNonZero – Retrieve the duration a Boolean or numeric value was in a non-zero state, using Simple Bounding Values.

NumberOfTransitions – Retrieve the number of changes between zero and non-zero that a Boolean or numeric value experienced in the interval.

Start – Retrieve the value at the beginning of the interval using Interpolated Bounding Values.

End – Retrieve the value at the end of the interval using Interpolated Bounding Values.

Delta – Retrieve the difference between the Start and End value in the interval.

StartBound – Retrieve the value at the beginning of the interval using Simple Bounding Values.

EndBound – Retrieve the value at the end of the interval using Simple Bounding Values.

DeltaBounds – Retrieve the difference between the StartBound and EndBound value in the interval.

DurationGood – Retrieve the total duration of time in the interval during which the data is Good.

DurationBad – Retrieve the total duration of time in the interval during which the data is Bad.

PercentGood – Retrieve the percentage of data (0 to 100) in the interval which has Good StatusCode.

PercentBad – Retrieve the percentage of data (0 to 100) in the interval which has Bad StatusCode.

WorstQuality – Retrieve the worst StatusCode of data in the interval.

WorstQuality2 – Retrieve the worst StatusCode of data in the interval, including the Simple Bounding Values.

AnnotationCount – Retrieve the number of Annotations in the interval (applicable only to Historical Aggregates).

StandardDeviationSample – Retrieve the standard deviation for the interval for a sample of the population (n − 1).

VarianceSample – Retrieve the variance for the interval as calculated by the StandardDeviationSample.

StandardDeviationPopulation – Retrieve the standard deviation for the interval for a complete population (n), which includes Simple Bounding Values.

VariancePopulation – Retrieve the variance for the interval as calculated by the StandardDeviationPopulation, which includes Simple Bounding Values.

MonitoredItem AggregateFilter

MonitoredItem AggregateFilter Defaults

The default values used for MonitoredItem Aggregates are the same as those used for historical Aggregates; they are defined in 4.2.1.2. For additional information on the MonitoredItem AggregateFilter, see IEC 62541-4.

MonitoredItem Aggregates and Bounding Values

When calculating MonitoredItem Aggregates that involve Bounding Values, the bounds may be unknown. The calculation process resembles that of a historical read with the Partial bit enabled. To account for potential data collection latency and minimize the use of the Partial bit, the historian typically waits for a brief period, usually not exceeding one ProcessingInterval, before performing the interval calculation.

A historical read done after data collection and the data received from the MonitoredItem over the same interval may therefore not produce the same values.

Exposing Supported Functions and Capabilities

Figure 1 illustrates a potential representation of Aggregate information within the AddressSpace. In this scenario, the top-level Server may offer Aggregate functionality for Interpolative, Total, Average, and additional types. However, DataVariable X is limited to supporting only Interpolative, Total, and Average, while DataVariable Y supports Average, a vendor-defined Aggregate, and other unspecified Aggregates.

Figure 1 – Representation of Aggregate Configuration information in the AddressSpace

5 Aggregate specific usage of Services

General

IEC 62541-4 specifies all Services needed for OPC UA Aggregates. In particular:

• The Browse Service Set or Query Service Set to detect Aggregates and their configuration

• The HistoryRead Service of the Attribute Service Set to read the aggregated history of the HistoricalNodes

(Figure 1 shows the ServerCapabilities Object (Part 5 – Information Model), the HistoryServerCapabilities Object (Part 11 – Historical Access), and HA Configuration Objects (Part 11 – Historical Access) linked by HasHistoricalConfiguration References.)

Aggregate data handling

Overview

The HistoryRead Service defined in IEC 62541-4 offers various functions, with the historyReadDetails parameter serving as an Extensible Parameter to specify the desired function. The ReadProcessedDetails structure is used to access aggregated data from HistoricalDataNodes.

The CreateMonitoredItems Service enables the specification of a filter for each MonitoredItem, using the MonitoringFilter, an extensible parameter tailored to the monitored item's type. For Subscriptions, the AggregateFilter structure is used to retrieve aggregated data.

ReadProcessedDetails structure overview

The ReadProcessedDetails structure is formally defined in IEC 62541-11. Table 6 outlines the components of the ReadProcessedDetails structure for the purposes of discussion in this document.

The ReadProcessedDetails parameters for executing a processed history read are:

• startTime and endTime – the period over which the read is performed;
• processingInterval – the interval at which aggregate values are returned;
• aggregateType[] – the NodeIds of the AggregateFunction Objects used for retrieving the processed history;
• aggregateConfiguration – the configuration settings:
  • useServerDefaults – when True, the Server's defaults override the other specified values;
  • treatUncertainAsBad, percentDataBad, percentDataGood, useSlopedExtrapolation – further data-handling options as detailed in 4.2.1.2.
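As a rough, non-normative illustration of assembling such a request, the sketch below mirrors these fields with plain Python dataclasses; the class and field names follow the structure described here rather than any particular OPC UA SDK, and the NodeId string is purely illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

@dataclass
class AggregateConfiguration:
    use_server_defaults: bool = True
    treat_uncertain_as_bad: bool = True
    percent_data_bad: int = 100
    percent_data_good: int = 100
    use_sloped_extrapolation: bool = False

@dataclass
class ReadProcessedDetails:
    start_time: datetime
    end_time: datetime
    processing_interval: timedelta
    aggregate_type: List[str]  # NodeIds of the AggregateFunction Objects
    aggregate_configuration: AggregateConfiguration = field(
        default_factory=AggregateConfiguration)

# A 10-minute Average over a half-hour window yields three intervals
# (12:00, 12:10, 12:20), matching the ProcessingInterval example in 3.1.1.
details = ReadProcessedDetails(
    start_time=datetime(2015, 1, 1, 12, 0),
    end_time=datetime(2015, 1, 1, 12, 30),
    processing_interval=timedelta(minutes=10),
    aggregate_type=["ns=0;i=2342"],  # illustrative NodeId for an Aggregate
)
```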

AggregateFilter structure overview

The AggregateFilter specifies the Aggregate function used to calculate the values to be returned, as formally defined in IEC 62541-4. For clarity, Table 7 presents the components of the AggregateFilter structure relevant to this standard.

The AggregateFilter parameters are:

• startTime – the beginning of the period over which the Aggregate is calculated;
• aggregateType – the NodeIds of the AggregateFunction Objects used for retrieving the processed data;
• processingInterval – the duration over which each Aggregate is computed;
• aggregateConfiguration – allows clients to override settings on a per-MonitoredItem basis:
  • useServerDefaults – when True, the Server's default values are applied and the other specified parameters are disregarded;
  • treatUncertainAsBad, percentDataBad, percentDataGood, useSlopedExtrapolation – see 4.2.1.2 for details.

Aggregates StatusCodes

Overview

Subclause 5.3 defines additional codes and rules that apply to the StatusCode when used for Aggregates

The general structure of the StatusCode is specified in IEC 62541-4. It includes a set of common operational result codes which also apply to Aggregates.

Operation level result codes

In OPC UA Aggregates, the StatusCode signifies the conditions under which a value or Event was stored, serving as a key indicator of its usability. Given the characteristics of aggregated data, it is essential to provide clients with additional information beyond the basic quality and call result code, such as whether the result was Interpolated and whether all data inputs used in the calculation were of Good quality.

Table 8 lists codes with Bad severity, indicating a failure, while Table 9 presents codes with Uncertain severity, which signify that values were obtained under sub-normal conditions. These codes are specific to OPC UA Aggregates and complement the codes applicable to all data types, as defined in IEC 62541-4, IEC 62541-8, and IEC 62541-11.

Table 8 – Bad operation level result codes

Bad_AggregateListMismatch – The requested number of Aggregates does not match the requested number of NodeIds. When multiple Aggregates are requested, a corresponding NodeId is required for each AggregateFunction.

Bad_AggregateNotSupported – The requested AggregateFunction is not supported by the Server for the specified Node.

Bad_AggregateInvalidInputs – The Aggregate value could not be derived due to invalid data inputs, errors attempting to perform data conversions, or similar situations.

Table 9 – Uncertain operation level result codes

Uncertain_DataSubNormal – The value is derived from raw values and has less than the required number of Good values.

Aggregate Information Bits

When obtaining Aggregate data, specific bits are set to indicate the source of the data value, which influences how the client uses it. Table 10 outlines the bit settings that reveal the data location, distinguishing whether the value is stored in the underlying data repository or derived from data aggregation. These bits are mutually exclusive.

When interpolated data is requested and a corresponding raw value exists for the specified timestamp, the Server must indicate this by setting the 'Raw' bit in the StatusCode of that value.

Table 11 lists the bit settings which indicate additional important information about the data values returned.

Partial – A calculated value that is not based on a complete interval. See 5.3.3.2.

ExtraData – If a Server chooses to set this bit, it indicates that a Raw data value supersedes other data at the same timestamp.

MultipleValues – Multiple values match the Aggregate criteria (i.e. multiple minimum values or multiple worst quality values at different timestamps within the same ProcessingInterval).

The conditions under which these information bits are set depend on how the data has been requested and the state of the underlying data repository.

The Partial bit signifies that the interval is incomplete, meaning a client may obtain a different Aggregate value if it re-reads the interval using the same parameters.

The Partial Bit will be set in the following examples:

In these examples, the first recorded point in the history collection is at 1:01:10 and the last recorded point is at 1:31:20. Older data may exist but is unavailable or offline during the query; newer data may also exist but has not yet been stored in the history collection.

• The interval that overlaps the beginning of the history collection. If the start time is 1:00:00 and the end time is 1:10:00, with a 2-minute interval, the first interval will have the Partial bit set because no data exists for the first 70 seconds. The Partial bit is always set for the first interval containing data if its start time falls before the first data point in the collection. Intervals entirely before the one with the Partial bit are returned as Bad_NoData.

The latest point in the history collection is 1:31:20, and the historian is still running. A 6-minute interval starting at 1:30:00 will have the Partial bit set, as the historian is anticipating data that has not yet arrived. The Partial bit is always set for the last interval containing data if its end time exceeds the last recorded data value. Any intervals entirely after the one with the Partial bit are returned as Bad_NoData. Additionally, for Aggregates with extrapolation, the Partial bit may also be set; refer to the specific characteristics of the Aggregate for further information.

If the start and end times do not create an even interval and there is extra data beyond the end time, the final interval will have the Partial bit set.

The time range spans from 1:00:00 to 1:20:00 with 6-minute intervals, except for the final interval, which lasts only 2 minutes and will have the Partial bit set. In this scenario extrapolation is not applicable. The Partial bit may be set alongside the Calculated bit, since the Calculated bit is consistently set for the specific Aggregate.

Aggregate details

General

Subclause 5.4 outlines the requirements and behaviour for OPC UA Servers that support Aggregates, aiming to standardize these Aggregates so that computation results are predictable and clearly understood. Users needing custom functionality are advised to create vendor-defined custom Aggregates.

Standard Aggregates should exhibit consistent behaviour: each Aggregate should respond similarly under comparable input parameters, raw data, and boundary conditions. Wherever feasible, Aggregates shall handle inputs and preconditions in a uniform manner.

Subclause 5.4 is divided into two parts. Subclause 5.4.2 deals with Aggregate characteristics and behaviour that are common to all Aggregates. Subclause 5.4.3 deals with the characteristics and behaviour of Aggregates that are aggregate-specific.

Common characteristics

Subclause 5.4.2 deals with Aggregate characteristics and behaviour that are common to all Aggregates

To read Historical Aggregates, OPC clients shall specify three time parameters:

– startTime (Start)
– endTime (End)
– ProcessingInterval (Int)

The OPC Server uses the three parameters to create a sequence of time intervals and computes an Aggregate for each interval. According to 5.4.2.2, the generated time intervals are determined by these parameters, with Table 12 detailing the intervals for each Start and End time combination. The Range is defined as |End – Start|, and all Aggregates return a timestamp marking the start of the interval, unless specified otherwise for a particular Aggregate.

Table 12 – History Aggregate interval information

• Start = End, Int = anything: No intervals. Returns a Bad_InvalidArgument StatusCode, regardless of whether there is data at the specified time or not.

• Start < End, Int = 0 or Int ≥ Range: One interval, starting at Start and ending at End; includes Start, excludes End, i.e. [Start, End).

• Start < End, Int ≠ 0, Int < Range, Int divides Range evenly: Range/Int intervals. The intervals are [Start, Start + Int), [Start + Int, Start + 2 × Int), …, [End – Int, End).

• Start < End, Int ≠ 0, Int < Range, Int does not divide Range evenly: ceil(Range/Int) intervals. The intervals are [Start, Start + Int), [Start + Int, Start + 2 × Int), …, [Start + floor(Range/Int) × Int, End). In other words, the last interval contains the "rest" that remains in the range after taking away floor(Range/Int) intervals of size Int.

• Start > End, Int = 0 or Int ≥ Range: One interval, starting at Start and ending at End; includes Start, excludes End, i.e. [Start, End).

• Start > End, Int ≠ 0, Int < Range, Int divides Range evenly: Range/Int intervals. The intervals are [Start, Start – Int), [Start – Int, Start – 2 × Int), …, [End + Int, End).

• Start > End, Int ≠ 0, Int < Range, Int does not divide Range evenly: ceil(Range/Int) intervals. The intervals are [Start, Start – Int), [Start – Int, Start – 2 × Int), …, [Start – floor(Range/Int) × Int, End). The final interval contains the "rest" that remains after taking away floor(Range/Int) intervals of size Int starting from Start; in this case time is perceived as moving backwards across the intervals.

When time flows backwards, the calculation of all Aggregates remains consistent with forward-time calculations, except that the early time is excluded from the interval while the late time is included. Typically this results in identical values, with timestamps shifted by one ProcessingInterval; for instance, the value at T = n in forward time corresponds to the value at T = n + 1 in backward time.
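The interval rules in Table 12 lend themselves to a compact implementation. The following sketch is illustrative only, using plain numeric timestamps (e.g. minutes); it covers the forward and backward cases and the trailing "rest" interval:

```python
import math

def history_intervals(start, end, interval):
    """Generate [lo, hi) interval pairs per Table 12 (illustrative sketch)."""
    if start == end:
        raise ValueError("Bad_InvalidArgument: Start == End")
    rng = abs(end - start)
    if interval == 0 or interval >= rng:
        return [(start, end)]                      # one interval [Start, End)
    step = interval if start < end else -interval  # backward time flips the sign
    count = math.ceil(rng / interval)              # last interval holds the "rest"
    bounds = [start + i * step for i in range(count)] + [end]
    return list(zip(bounds[:-1], bounds[1:]))

# 12:00-12:30 at 10 min (expressed in minutes): three even intervals.
assert history_intervals(0, 30, 10) == [(0, 10), (10, 20), (20, 30)]
# 1:00-1:20 at 6 min: three 6-minute intervals plus a 2-minute "rest" interval.
assert history_intervals(0, 20, 6) == [(0, 6), (6, 12), (12, 18), (18, 20)]
# Backward time: Start > End.
assert history_intervals(30, 0, 10) == [(30, 20), (20, 10), (10, 0)]
```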

Note that when determining Aggregates with a MonitoredItem, the interval is simply the ProcessingInterval parameter as defined in the AggregateFilter structure. See IEC 62541-4 for more details.

Table 13 details the valid DataTypes for each Aggregate: some are defined for numeric data types (integers and real/floating-point numbers), while others apply to digital data types such as Boolean or enumerations. Dates, strings, and arrays are not supported. Certain Aggregates may also return results with a different DataType than the one used in the calculation; the table specifies the DataType returned for each Aggregate.

Table 13 – Standard History Aggregate data type information

BrowseName | Valid data type | Result data type
Interpolative | Numeric | Raw data type
Minimum | Numeric | Raw data type
Maximum | Numeric | Raw data type
MinimumActualTime | Numeric | Raw data type
MaximumActualTime | Numeric | Raw data type
Range | Numeric | Raw data type
Minimum2 | Numeric | Raw data type
Maximum2 | Numeric | Raw data type
MinimumActualTime2 | Numeric | Raw data type
MaximumActualTime2 | Numeric | Raw data type
Range2 | Numeric | Raw data type
DurationInStateZero | Numeric or Boolean | Duration
DurationInStateNonZero | Numeric or Boolean | Duration
NumberOfTransitions | Numeric or Boolean | Integer
Start | All | Raw data type
End | All | Raw data type
Delta | Numeric | Raw data type
StartBound | All | Raw data type
EndBound | All | Raw data type
DeltaBounds | Numeric | Raw data type

The following issues may come up when calculating Aggregates that include time as part of the calculation

All Aggregate calculations include the startTime but exclude the endTime. However, it may be necessary to return an Interpolated End Bound for an interval with a timestamp within it. Servers should use the time immediately before the endTime, with the Server's time resolution determining the exact value. For instance, if the endTime is 12:01:00 and the time resolution is 1 second, the EffectiveEndTime would be 12:00:59. Conversely, if the Server's time resolution is 1 millisecond, the EffectiveEndTime would be 12:00:59.999.

When a single data point exists within the interval and coincides with the startTime, the time duration used for calculations is equivalent to one unit of the Server's time resolution.
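A tiny sketch of the EffectiveEndTime rule, assuming the Server's resolution is available as a timedelta (the helper name is ours, not from the standard):

```python
from datetime import datetime, timedelta

def effective_end_time(end_time: datetime, resolution: timedelta) -> datetime:
    """The time immediately before endTime, at the Server's time resolution."""
    return end_time - resolution

end = datetime(2015, 1, 1, 12, 1, 0)
# 1-second resolution: 12:00:59; 1-millisecond resolution: 12:00:59.999.
assert effective_end_time(end, timedelta(seconds=1)) == datetime(2015, 1, 1, 12, 0, 59)
assert effective_end_time(end, timedelta(milliseconds=1)) == datetime(2015, 1, 1, 12, 0, 59, 999000)
```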

Specific Aggregated data handling

When accessing aggregated data using the HistoryRead or the CreateMonitoredItems Service, the following rules are used to handle specific Aggregate use cases

When the ProcessingInterval is set to 0, the Server generates a single Aggregate value for the entire specified time range, enabling Aggregates over extensive periods. Any value with a timestamp matching endTime is excluded from this Aggregate, just as it would be excluded from an interval ending at that time. If a ProcessingInterval of 0 is provided in the MonitoredItemFilter, it is adjusted to an appropriate non-zero value.

The timestamp returned with the Aggregate shall be the time at the beginning of the interval, except where the Aggregate specifies a different timestamp

If a timestamp other than the source timestamp is requested, the operation yields a Bad_TimestampToReturnInvalid StatusCode. If a requested timestamp is not supported for a HistoricalDataNode, the operation returns a Bad_TimestampNotSupported StatusCode. For MonitoredItems, the Server will not provide past data if the requested timestamp is not supported by the history collection.

StatusCodes for an Aggregate value take into account the values used in its calculation. Additionally, the configuration parameters PercentDataGood and PercentDataBad enable the client to influence this calculation, provided the Server supports them.

Aggregates that operate on raw values, such as averages, calculate by counting these values. When an Aggregate can return a Bounding Value, these values are included in the count for the StatusCode computation. Additionally, if an Aggregate performs time-weighted calculations, like TimeAverage or TimeAverage2, the StatusCode calculation is also time-weighted.

To calculate time-weighted StatusCodes, each interval must be segmented into regions of Good or Bad data. This segmentation involves determining the bounding values for each interval, which vary based on the type of Aggregate used.

When the parameter TreatUncertainAsBad is False, Uncertain regions are counted as part of the Good regions in the ratio calculations; when it is True, Uncertain regions are counted as Bad. In either case the StatusCode of the result remains Uncertain. The interval StatusCode is then determined as follows:

• If there are no Bad regions in the interval, its StatusCode is Good.
• Otherwise, sum the duration of the Bad regions, divide by the interval's width, and compare the ratio to the PercentDataBad parameter; if the ratio meets or exceeds the PercentDataBad threshold, the interval's StatusCode is Bad.
• For intervals not classified as Bad, sum the duration of the Good regions, divide by the interval's width, and compare the ratio to the PercentDataGood parameter; if the ratio meets or exceeds the PercentDataGood threshold, the interval's StatusCode is Good.
• If neither threshold is met, the interval's StatusCode is Uncertain_DataSubNormal.
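Taken together, the steps above reduce to a small amount of arithmetic, as the following non-normative Python sketch shows; it assumes the interval has already been split into (duration, quality) regions, which is itself Aggregate-specific:

```python
def interval_status(regions, interval_width, percent_data_good=100,
                    percent_data_bad=100, treat_uncertain_as_bad=True):
    """StatusCode for one interval from (duration, quality) regions.

    regions: list of (duration, quality) with quality in {"Good", "Uncertain",
    "Bad"}. Durations are assumed to sum to interval_width; a partial interval
    would pass the smaller width actually used in its calculation.
    """
    bad = sum(d for d, q in regions
              if q == "Bad" or (treat_uncertain_as_bad and q == "Uncertain"))
    if bad == 0:
        return "Good"
    if 100.0 * bad / interval_width >= percent_data_bad:
        return "Bad"
    good = sum(d for d, q in regions
               if q == "Good" or (not treat_uncertain_as_bad and q == "Uncertain"))
    if 100.0 * good / interval_width >= percent_data_good:
        return "Good"
    return "Uncertain_DataSubNormal"
```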

If an interval that lies within the range [StartOfData, EndOfData] contains no data, the StatusCode for that interval is Bad_NoData when the Aggregate's return data type is the raw data type, unless the specific characteristics of the Aggregate state otherwise.

The width of an interval is determined by the ProcessingInterval, except when it is a partial interval (indicated by the Partial bit being set), in which case the width corresponds to the time actually used in calculating the partial interval.

Subclauses 5.4.3.2.2 and 5.4.3.2.3 include diagrams that illustrate a request and data series. The colour of the time axis indicates the status for different regions: red indicates Bad, green indicates Good, and orange indicates Uncertain. These examples assume TreatUncertainAsBad = False.

5.4.3.2.2 Sloped Interpolation and Simple Bounding Values

Figure 2 displays a data series for the Variable with Stepped set to False, using Simple Bounding Values for aggregation. The request being processed has a Start Time before the first point in the series, and its End Time does not align with an integer multiple of the ProcessingInterval.

Figure 2 – Variable with Stepped = False and Simple Bounding Values

The first interval has four regions:

• the period before the first data point;

• the period between the first and second where SlopedInterpolation can be used;

• the period between the second and third point where SteppedInterpolation is used;

• the period after the Bad point where no data exists

A region is Uncertain if it ends in a Bad or Uncertain value and SlopedInterpolation is used. The end point has no effect on the region if SteppedInterpolation is used.

The second interval has three regions:

• the period before the first Good data point where no data exists;

• the period between the first and second where SlopedInterpolation can be used;

• the period between the second point and the bound calculated with SlopedInterpolation

The third interval has the following regions:

• the period between the first point and an interpolated point that falls on the end time;

• the period after the end time which is ignored

In this region, data beyond the specified end time is disregarded. However, if SlopedInterpolation is applied and the point following the end point is Uncertain, the area between the last data point and the end time is also deemed Uncertain.

5.4.3.2.3 Stepped Interpolation and Interpolated Bounding Values

Figure 3 displays a data series for the Variable with Stepped set to True, using Interpolated Bounding Values for aggregation. The request being processed has a Start Time before the first point in the series, and its End Time does not align with an integer multiple of the ProcessingInterval.

Figure 3 – Variable with Stepped = True and Interpolated Bounding Values

The first interval has three regions:

• the period before the first data point;

• the period between the first and second where SteppedInterpolation is used;

• the period between the second and the interpolated end bound

The effect of the interpolated end bound is that Bad values are overlooked, which creates the Uncertain regions. If SlopedInterpolation were applied, the Uncertain region would begin at the second point; in this scenario, however, it begins only where the first Bad value is disregarded.

The second interval has three regions:

• the period between the start bound and the first data point;

• the period between the first and second where SteppedInterpolation is used;

• the period between the second and the interpolated end bound

The third interval has three regions:

• the period between the interpolated bound and the first data point;

• the period between the first point and an interpolated point that falls on the end time;

• the period after the end time which is ignored

This is a partial region and the data after the end time is not used


Subclause 5.4.3.3 deals with characteristics and behaviour specific to a particular Aggregate.

Each subclause has a table which formally expresses the Aggregate behaviour (including any exceptions). The meaning of each of the fields in the table is described in Table 14.

• The first column is the common name for the item

• The second column includes a description of the item and a list of the valid selections for the item, including a description of each selection

• The second part of the table describes how the status associated with the Aggregate calculation is computed

The final section of the table outlines the expected behaviors of the Aggregate in various common special cases, necessitating detailed text descriptions rather than a simple list of valid selections.

Type The type of Aggregate

Interpolated: See the definition for interpolated data

Calculated: Computed from defined calculation

Raw: Selects a raw value from within an interval

Data Type The data type of the result

Use Bounds How the Aggregate deals with bounds

None: Bounds do not apply to the Aggregate

Timestamp What is the time stamp of the resulting Aggregate value:

startTime: The time at the start of the interval

endTime: The time at the end of the interval

Raw: The time associated with a value in the interval

Calculation Method How the status code is calculated:

PercentValues: Based on percentage of value counts

PercentTime: Based on percentage of time interval

Custom: Specific to the Aggregate (description included)

Partial For partial intervals does the Aggregate set this bit

It may also describe any special cases for setting this bit.

Calculated Describes the usage of the Calculated bit

Set Always: The bit is always set

Set Sometimes: The bit is sometimes set (describes when)

Not Set: The bit is never set

Interpolated Describes the usage of the Interpolated bit

Set Always: The bit is always set

Set Sometimes: The bit is sometimes set (describes when)

Not Set: The bit is never set

Raw Describes the usage of the Raw bit

Set Always: The bit is always set

Set Sometimes: The bit is sometimes set (describes when)

Not Set: The bit is never set

Multi Value Describes the usage of the multi value bit

Set Sometimes: The bit is used (see IEC 62541-11)

Not Set: The bit is never set

Status Code Common Special Cases

Before Start of Data If the entire interval is before the start of data

After End of Data If the entire interval is after the end of data (as determined by the Historian)

Start Bound Not Found If the starting bound is not found for the earliest interval and it is not partial, then what, if any, special processing should be done

End Bound Not Found If the ending bound is not found for the latest interval and it is not partial, then what, if any, special processing should be done

Bound Bad – If the bounding value is Bad, what, if any, special processing should be done.

Bound Uncertain – If the bounding value is Uncertain, what, if any, special processing should be done.

The Interpolative Aggregate defined in Table 15 returns the Interpolated Bounding Value (see 3.1.8) for the startTime of each interval.

