1624 "Internal scan cycle wait time" must be part of transfer time
Created: 29 Mar 2018
Part: Part 10 (2012; Edition 2)
Issue: I totally disagree with the phrase “The application time typically is the sum of internal scan cycle wait time and the actual logic processing time”. I believe that the internal scan cycle wait time must be part of the transfer time, and that the application time is the actual logic processing time and nothing more. The standard gives no explanation of why the scan cycle wait time is considered part of the application time rather than the transfer time.
The existing method of subtracting the scan cycle (from the maximum) and scan cycle/2 (from the average) gives an UNREASONABLE advantage to devices with a long scan cycle and an unreasonable disadvantage to other devices.
In UCAIUG certificates we can now see only times “compensated for scan delays, if any”. We cannot see the actually measured values. In random measurements the most important values are the average time and the maximum time; in the certificates there are just (average time minus an unknown X/2) and (maximum time minus an unknown X).
Proposal: Change the text to “Application time is the actual logic processing time. In the case of ping-pong it is set to zero”.
Change the formulas so that, for all values (average, maximum, minimum): application time = 0, transfer time = round-trip time.
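The difference between the current accounting and the proposal can be sketched as follows (the function names `compensated` and `proposed` are illustrative, not from the standard; the scan cycle of 1 ms is an assumed example):

```python
# Current Part 10 approach, as described in this tissue: the internal scan
# cycle is subtracted from the measured round-trip statistics
# (average loses scan_cycle/2, maximum loses a full scan_cycle).
def compensated(t_min, t_avg, t_max, scan_cycle):
    return (t_min, t_avg - scan_cycle / 2, t_max - scan_cycle)

# Proposed approach: application time = 0, transfer time = round-trip time,
# so the reported figures are simply the raw measurements.
def proposed(t_min, t_avg, t_max):
    return (t_min, t_avg, t_max)

print(compensated(3.0, 3.5, 4.0, scan_cycle=1.0))  # (3.0, 3.0, 3.0)
print(proposed(3.0, 3.5, 4.0))                     # (3.0, 3.5, 4.0)
```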
I understand that you tried to separate the networking part of a device from the protection part and to measure just the networking part. But unfortunately, I think the result of this attempt is not satisfying.
First of all, if the processing of network packets is quick but the protection logic is slow, the quickness of the network processing is useless.
Second, it is not honest. Let’s take two devices. The 1st device has tmin = 3 ms, tavg = 3.5 ms and tmax = 4 ms. The 2nd device has tmin = 0.5 ms, tavg = 6.5 ms and tmax = 12.5 ms. After “compensation” the 1st device shows tmin = 3 ms, tavg = 3 ms and tmax = 3 ms, while the 2nd shows tmin = 0.5 ms, tavg = 0.5 ms and tmax = 0.5 ms. Without “compensation” the 1st device is better; after “compensation” the 2nd is better.
Third, the maximum time is just one measurement and is subject to random fluctuations. For example, by accident one message is delayed, and instead of (tmin = 0.5 ms, tavg = 6.5 ms, tmax = 12.5 ms) we measure (tmin = 0.5 ms, tavg = 6.5 ms, tmax = 18.5 ms). After “compensation” (with X = 18 ms) tavg = −2.5 ms, i.e. it becomes negative.
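The arithmetic in the two objections above can be checked with a few lines of code (the scan-cycle values of 1 ms, 12 ms and 18 ms are inferred from the certificate figures quoted in the text, since the actual X is not published):

```python
# Compensation as described in this tissue: the average loses half a scan
# cycle, the maximum loses a full scan cycle.
def compensate(t_min, t_avg, t_max, scan_cycle):
    return (t_min, t_avg - scan_cycle / 2, t_max - scan_cycle)

# Device 1 (scan cycle inferred as 1 ms) vs device 2 (inferred as 12 ms).
dev1 = compensate(3.0, 3.5, 4.0, scan_cycle=1.0)    # (3.0, 3.0, 3.0)
dev2 = compensate(0.5, 6.5, 12.5, scan_cycle=12.0)  # (0.5, 0.5, 0.5)
# The raw averages favour device 1 (3.5 ms vs 6.5 ms), but the compensated
# averages favour device 2 (3.0 ms vs 0.5 ms): the ranking flips.
print(dev1, dev2)

# One randomly delayed message raises t_max to 18.5 ms; with an inferred
# scan cycle of 18 ms the "compensated" average becomes negative.
delayed = compensate(0.5, 6.5, 18.5, scan_cycle=18.0)
print(delayed[1])  # -2.5
```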
Last point: I don’t like the UCAIUG certificate itself. It is absolutely impossible to tell from it whether it contains raw measured data or “compensated” data. And if the data is “compensated”, it is absolutely impossible to know what the value of the “scan cycle” was.
By the way, I have never heard of delaying the detection of an event, only of delaying the reaction to it. With time-overcurrent protection this is the internal logic of the algorithm; I cannot call it an “intentional delay of detection”.
If individual data objects have different scan cycles, then they should be measured separately, and a separate certificate should be issued for each type of data object.
|17 Apr 18
Clause 8.2.1 defines transfer time as the time from DETECTION of an event by the application process at one device to the DELIVERY of the event to the application process at a second device. One reason for this definition is to allow device algorithms to intentionally delay the detection of an event without being penalized (for example with time-overcurrent protection).
The issue is "how to compensate for the time difference between a physical event occurring and the detection of that event by the application?". The method outlined in 8.2 accomplishes this compensation.
I believe your issue is that transfer time does NOT include latencies due to scan times. “Worst-case latency” could be defined as transfer time plus scan_time, and “average latency” as transfer time plus 0.5 * scan_time. Note that scan_time may differ for individual data objects within a device.
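The two latency definitions suggested in this comment can be sketched as follows (the helper names are hypothetical; the 3 ms transfer time and 1 ms scan time are assumed example values):

```python
# Worst case: the event occurs just after a scan, so it waits a full
# scan cycle before detection, then takes the transfer time to arrive.
def worst_case_latency(transfer_time, scan_time):
    return transfer_time + scan_time

# Average case: the event waits half a scan cycle on average.
def average_latency(transfer_time, scan_time):
    return transfer_time + 0.5 * scan_time

print(worst_case_latency(3.0, 1.0))  # 4.0
print(average_latency(3.0, 1.0))     # 3.5
```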
|30 Mar 18