2. Response deficit analysis in wind farm performance monitoring
Prof Dr Peter J M Clive
Wednesday, 28 November 2017
7. • SCADA Time series data
– Statistics such as means and variances acquired over a
succession of contiguous averaging intervals, e.g. 10 minute
averages of wind speed, active power export, etc.
• SCADA Event data
– Instances of specific events recorded with details including
detection and reset times, duration, event code, and the values
of key parameters, e.g. alarm data
• SCADA Cumulative data
– Running totals of key quantities such as production, downtime,
time in service, etc.
• CMS data
– High frequency data for signal processing and comparison with
set points
Different kinds of data
8. Different kinds of data
• Data from individual wind turbines
– SCADA, CMS
• Sub-station data
• Point-of-sale meter data
• On site met mast data
– Permanent met mast
– Power performance assessment
reference mast
• Remote sensing data
– Nacelle mounted Lidar
– Wind profilers (Lidar, Sodar)
– Scanning Lidars
Understand the output in terms of production and status information.
Understand the incident wind resource to which the wind turbines are
responding.
9. Different kinds of data
• Condition monitoring
– Acquisition of high frequency CMS signals
– Sensors installed on drive train components
– Accelerometers, strain gauges, oil particulate
counters, temperature sensors, etc.
– Signal processing, set points and thresholds
• Performance monitoring
– Uses routine operational SCADA data
– Accumulation of statistics
– Trends and anomalies detected
– Integration of time series and event data
– Robust with low incidence of false positives
10. Case studies
Response Deficit Analysis of SCADA data
• The plots illustrating the variation of one parameter (e.g.
active power) in response to variations in another (e.g.
wind speed or bearing temperature) are too numerous to be
inspected individually in a cost-effective way;
• Response Deficit Analysis enables the statistical
characterisation of these response curves so that a “graph
of graphs” can be produced that an analyst can interpret
instantly, identifying deviant behaviour in a timely, focused
way that optimally leverages their experience and
expertise.
11. 1. Select two data tags that can be paired. For example:
• 10-minute average hub height wind speed and
• Concurrent 10-minute average active power
2. This allows the observed power curve to be compared to
a reference power curve (a binning sketch follows this list).
3. N.B. the same technique can be applied to any
relationship, such as:
• RPM v. Pitch Angle
• Drive-end v. Non-drive-end bearing temperature
4. The data tag values exhibit a relationship (for example:
the power curve). One value varies in response to
variations in the other.
Response Deficit Analysis (RDA)
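A minimal sketch of steps 1 and 2 in Python. The function name, the 0.5 m/s bin width, and the method-of-bins reduction are illustrative assumptions, not prescribed by the slides:

```python
import numpy as np

def binned_response(x, y, bin_edges):
    """Method-of-bins response curve: mean of y within each bin of x.

    For a power curve, x is the 10-minute average hub height wind
    speed (m/s) and y the concurrent 10-minute average active power (kW).
    """
    idx = np.digitize(x, bin_edges) - 1          # bin index per sample
    curve = np.full(len(bin_edges) - 1, np.nan)  # NaN marks empty bins
    for b in range(len(bin_edges) - 1):
        in_bin = idx == b
        if in_bin.any():
            curve[b] = y[in_bin].mean()
    return curve

# Example: 0.5 m/s bins spanning a typical operating range.
bin_edges = np.arange(0.0, 25.5, 0.5)
v_centres = 0.5 * (bin_edges[:-1] + bin_edges[1:])
# observed_curve = binned_response(wind_speed, active_power, bin_edges)
```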
12. 5. Select a reference response. This could be
representative, typical, or warranted, depending on why you
are undertaking RDA. For example:
• The warranted power curve
• The long-term average observed power curve
• The power curve observed on average over a number of
turbines during the short-term period under
investigation (sketched below)
• Some other reference considered typical or
representative
Response Deficit Analysis (RDA)
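One of the reference options above, sketched under the same assumptions as the previous block: a fleet-average reference built from per-turbine binned responses over the period under investigation.

```python
import numpy as np

def fleet_average_reference(per_turbine_curves):
    """Bin-wise mean of per-turbine binned responses.

    per_turbine_curves: list of arrays from binned_response(), all
    computed on the same bin_edges over the same period (hypothetical
    data). nanmean ignores bins that are empty for some turbines.
    """
    return np.nanmean(np.vstack(per_turbine_curves), axis=0)
```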
13. 6. Observe measured responses in groups of paired tags
(for example: grouped by turbine and period of time,
generating a measured power curve for each turbine for
the period in question).
7. Subtract the measured responses from the reference
response: these are the response deficits (for example:
subtract the measured power curve from the reference
power curve).
8. Choose metric generators. These are functions whose
values can be weighted by the response deficit (for
example: in the case of a power curve, these could be
functions of wind speed). A sketch of steps 7 and 8
follows this list.
Response Deficit Analysis (RDA)
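Continuing the sketch for steps 7 and 8. The centred polynomial generators follow the moment analogy made explicit in step 10; the specific functional forms here are assumptions, not mandated by the method:

```python
import numpy as np

def response_deficit(reference_curve, measured_curve):
    """Step 7: deficit = reference minus measured, per wind-speed bin."""
    return reference_curve - measured_curve

# Step 8: metric generators as functions of wind speed. Centred
# polynomials are one natural choice, by analogy with statistical
# moments (v0 is a centring speed, e.g. a deficit-weighted mean).
def generator_order_n(v, v0, n):
    return (v - v0) ** n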
18. 9. Calculate the "performance" or "response" metrics.
• These are the average values of the metric generator
functions weighted by the response deficit.
• Calculate at least two (a sketch follows this list).
• These can then be plotted against each other to
characterise the response relative to the reference for
the group of paired tags.
• This provides a “graph of graphs” where each point
represents one instance of the response under
investigation.
• Anomalous responses are immediately obvious.
Response Deficit Analysis (RDA)
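A sketch of step 9 under the same assumptions. Treating the deficit as a weighting density over the wind-speed bins makes the metrics deficit-weighted central moments; whether the weights should be signed or absolute deficits is a design choice the slides leave open (signed weights are used here):

```python
import numpy as np

def deficit_weighted_mean(values, deficit):
    """Average of `values` weighted by the response deficit."""
    return np.nansum(values * deficit) / np.nansum(deficit)

def deficit_metric(v_centres, deficit, order):
    """Step 9: deficit-weighted central moment of wind speed.

    Mirrors the calculation of statistical moments, with the response
    deficit playing the role of the density.
    """
    v0 = deficit_weighted_mean(v_centres, deficit)
    return deficit_weighted_mean((v_centres - v0) ** order, deficit)

# Two or more metrics per turbine-week, ready to plot against each other:
# m2 = deficit_metric(v_centres, deficit, 2)   # variance-like
# m3 = deficit_metric(v_centres, deficit, 3)   # skewness-like
# m4 = deficit_metric(v_centres, deficit, 4)   # kurtosis-like
```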
19. 10. Normalise the metrics by a common "normalisation" metric
generator.
• Raise the normalisation metric to the order of each metric
divided by the order of the normalisation metric.
• For example:
• Metric generator 1 is a 3rd-order polynomial, yielding a metric
proportional to the skewness of the response deficit,
• Metric generator 2 is a 4th-order polynomial, yielding a metric
proportional to the kurtosis of the response deficit,
• Divide the metric from generator 1 by the 2nd-order
normalisation metric (proportional to the variance of the
response deficit) raised to the power 3/2, and
• Divide the metric from generator 2 by the same normalisation
metric raised to the power 2 (=4/2).
A sketch of this normalisation follows.
Response Deficit Analysis (RDA)
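The normalisation in step 10 as a one-line rule. This assumes a non-negative normalisation metric (e.g. one computed with absolute deficit weights) so that fractional powers stay real:

```python
def normalised_metric(metric, metric_order, norm_metric, norm_order=2):
    """Step 10: divide a metric by the normalisation metric raised to
    (metric order / normalisation order)."""
    return metric / norm_metric ** (metric_order / norm_order)

# With a 2nd-order (variance-like) normalisation metric m2:
# skew_like = normalised_metric(m3, 3, m2)  # divides by m2 ** 1.5
# kurt_like = normalised_metric(m4, 4, m2)  # divides by m2 ** 2.0
```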
20. 11. The results of applying the metric generators provide
response deficit metrics that can be plotted to visualise the
data, creating a graph of graphs.
12. For example, the metric obtained using generator 2 can be
plotted against the metric obtained using generator 1 in step
10 above.
An example is shown on the next slide; a plotting sketch
follows below.
Response Deficit Analysis (RDA)
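A plotting sketch for steps 11 and 12 (matplotlib and the axis labels are assumptions; any plotting tool will do). Each point is one instance of the response, e.g. one turbine-week, and outliers from the main cluster flag anomalous behaviour for closer inspection:

```python
import matplotlib.pyplot as plt

def rda_metric_plot(metric_1, metric_2, labels=None):
    """Graph of graphs: one point per instance of the response."""
    fig, ax = plt.subplots()
    ax.scatter(metric_1, metric_2, s=15)
    if labels is not None:  # e.g. "WTG01 wk23" per point (hypothetical)
        for x, y, lab in zip(metric_1, metric_2, labels):
            ax.annotate(lab, (x, y), fontsize=7)
    ax.set_xlabel("Metric 1 (3rd order, skewness-like)")
    ax.set_ylabel("Metric 2 (4th order, kurtosis-like)")
    ax.set_title("RDA metric plot (one point per turbine-week)")
    return fig
```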
21. [RDA metric plot: each point represents one turbine's performance
during one week. Anomalies stand apart from the main sequence; one
anomalous point is annotated at 19% of AEP.]
Inspection of performance metrics enables rapid identification of
anomalous performance in seconds or minutes
Response Deficit Analysis (RDA)
25. Case studies
Severe underperformance that had gone
unnoticed for months was instantly
detected using Response Deficit Analysis
once SgurrTrend services were engaged.
A controller fault due to an incorrect set
point was causing production losses of
nearly 20%.
27. Case studies
Yield deficit analysis
Tower vibration occurs at a specific wind speed and
hence rotor rpm: rotor imbalance is indicated, probably
due to poor pitch regulation in high shear, incurring
downtime and production losses of around 1% and
contributing to premature gearbox failure through high
torque variance.
29. Case studies
Pitch misalignment is immediately
identified using SgurrTrend. The impact
of this fault is a reduction of 10% in
annual energy production (AEP).
31. Case studies
[Side-by-side plots: Turbine 1 vs Turbine 2]
Wind turbine inter-comparison reveals anomalous or delinquent performance: in
this case a delayed cut-in costing 1% of the affected turbine's production,
with WTG01 losing >15 MWh per month as a result
33. Case studies
A controller fault is immediately
identified using Response Deficit
Analysis: a premature cut-out is
costing 1% of AEP. This is corrected
by the installation of appropriate
firmware and controller settings.
34. 1st generation: extrapolation
• Mast mounted sensors and remote sensing vertical profilers
2nd generation: inference
• Inference of wind conditions from measurements in multiple locations
using scanning devices
3rd generation: direct observation
• Wind parameters of interest are all directly observed within the entire
domain of interest
• Measurement is intuitive: all that is required to interpret the
measurement is knowledge of its purpose rather than instrument-
specific expertise
• Example: multiple synchronised lidars fulfil at least some of the
requirements of a 3rd generation system
Towards 3rd generation sensors
35. The IEA Wind Energy Task 32 is adopting a "use case" framework for
describing the application of lidar in wind energy assessments, to ensure
that well-documented measurement techniques are applied in a manner that
is fit for purpose with the degree of consistency required for investor
confidence.
A use case considers three things:
• Data requirements: articulated without reference to the capabilities of
the possible methods that are available to fulfil them.
• Measurement method: there are multiple options available whose
suitability depends upon the data requirements that are being fulfilled.
• Situation: the performance of a particular method may depend upon
the circumstances in which it is deployed.
IEA Use Cases
36. Clifton, A. et al., IEA Wind Energy Task 32 Remote Sensing of Complex Flows by Doppler
Wind Lidar: Issues and Preliminary Recommendations, NREL, 2015
[Diagram: a use case links the data requirements, the measurement
method, and the data acquisition situation.]
IEA Task 32 Lidar Use Cases
37. What data requirements arise in this situation?
What measurement method fulfils my data requirements?
What measurement accuracy is verified in this situation?
IEA Task 32 Lidar Use Cases
40. Conclusions
• Response Deficit Analysis is a general technique that can
be applied to any data exhibiting relationships between
variables that can be compared to a reference.
• The difference between the observed and reference
relationships is the deficit
• Generate metrics from this deficit using functions in a
similar way to calculating statistical moments
• These metrics can be plotted against each other to
produce a “graph of graphs” amenable to rapid inspection
• Anomalous performance is made immediately obvious