Average output rate = [1 × (0.95 × 0.9 × 0.9)]
                    + [0.5 × (0.95 × 0.9 × 0.1)]
                    + [0.5 × (0.95 × 0.1 × 0.9)]
                    = 0.7695 + 0.04275 + 0.04275
                    = 0.855
                    = 85.5%
It must be noted that this average output rate is expressed as a percentage of the possible output that can be achieved at maximum design capacity and 100% utilisation. It is thus an expression of the output capability of the plant as a whole, depending on the percentage capacity for each state of the plant, as a result of the states of each individual system, multiplied by its availability.
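As an illustration, the expected output rate calculation above can be reproduced programmatically. The following Python sketch is illustrative only and is not part of the original text; the state capacities and system availabilities are those of the worked example (boiler availability 0.95, turbine availabilities 0.9 each, full outages only).

# Illustrative sketch (not from the original text): expected output rate of the
# simple three-system plant, assuming boiler availability 0.95 and turbine
# availabilities 0.9 each, with full outages only.

A_BOILER, A_T1, A_T2 = 0.95, 0.9, 0.9

# Each plant state: (capacity as fraction of MDC, probability of that state).
# Only states with non-zero capacity contribute to the expected output rate.
states = [
    (1.0, A_BOILER * A_T1 * A_T2),        # all systems up
    (0.5, A_BOILER * A_T1 * (1 - A_T2)),  # #2 turbine down
    (0.5, A_BOILER * (1 - A_T1) * A_T2),  # #1 turbine down
]

average_output_rate = sum(capacity * prob for capacity, prob in states)
print(f"Average output rate = {average_output_rate:.4f}")  # 0.8550, i.e. 85.5%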
The above example of the simple three-system plant is insightful as an introduction to a plant having several states in which outages are regarded as full outages. If, however, the system outages are such that, over a specific period of time T, the systems could experience full outages as well as partial outages that are limited to 50% of system output, then a table similar to the previous state matrix can be developed.

To calculate the expected or average output rate of the plant over an operating period T, the percentage capacity (% of MDC) for each state during the shift period T is multiplied by the equivalent availability of each system. The partial state matrix given in Table 4.2 presents the possible states of each system over a period of time T. These states are either 100% down (full outage) or 50% down (partial outage).
Table 4.2 Double turbine/boiler generating plant partial state matrix

State   Boiler in period T   #1 Turbine in period T   #2 Turbine in period T   % MDC
…
13      Down 50%             Down 50%                 Down 100%                25%
…
15      Down 50%             Down 100%                Down 100%                0%
16      Down 100%            Down 100%                Down 100%                0%
If, for simplicity, the likelihood and duration of each state in the above partial state matrix table are considered to be equal, and each state can occur only once in the operating period T, then, according to Eq. (4.141) given above, the equivalent availability of each system can be calculated as:

EA = Σ[To × n(MDC)] / (T × MDC)    (4.141)

In this case:

To = operational period for each state
T = total operating period or cycle
n(MDC) = capacity as % of MDC for each system
n = fraction of process output
MDC = maximum design capacity
Thus, the equivalent availability for the boiler in period T (i.e. working down the second column of Table 4.2) can be calculated using Eq. (4.141). Since each of the 16 possible states is assumed to occur once in the 24-hour operating period, each state has a duration To = 24/16 = 1.5 h:

EA_Boiler = (1.5)[5 × 1 + 0.5 + 0 + 8 × 0.5 + 0] / (24 × 1)
          = (1.5)(9.5) / 24
          = 0.59375 = 59.375%

Similarly, the equivalent availability for the #1 turbine and the #2 turbine can also be calculated.
The equivalent availability for the #1 turbine in period T (i.e. working down the third column of Table 4.2):

EA_#1 Turbine = (1.5)[7 × 1 + 4 × 0.5 + 5 × 0] / (24 × 1)
              = (1.5)(9) / 24
              = 0.5625 = 56.25%

The equivalent availability for the #2 turbine in period T (i.e. working down the fourth column of Table 4.2):

EA_#2 Turbine = (1.5)[7 × 1 + 4 × 0.5 + 5 × 0] / (24 × 1)
              = (1.5)(9) / 24
              = 0.5625 = 56.25%
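The equivalent availability calculation of Eq. (4.141) can also be expressed as a short function. The following Python sketch is a minimal illustration, not part of the source text; it assumes the 16 equally likely states of Table 4.2, each lasting To = 1.5 h of the 24-h period, and reproduces the boiler and turbine results above.

# Minimal sketch of Eq. (4.141): EA = sum(To * n(MDC)) / (T * MDC).
# Assumes each of the 16 states in Table 4.2 lasts To = 1.5 h in a 24-h period.

def equivalent_availability(capacities, t_o=1.5, t_total=24.0, mdc=1.0):
    """capacities: capacity fraction (n of MDC) of one system in each state."""
    return sum(t_o * n for n in capacities) / (t_total * mdc)

# Boiler column of Table 4.2: five states at full capacity, one at 50%, one at
# 0%, eight at 50%, and one at 0% (the bracketed terms of the worked example).
boiler = [1.0] * 5 + [0.5] + [0.0] + [0.5] * 8 + [0.0]
turbine = [1.0] * 7 + [0.5] * 4 + [0.0] * 5  # columns for #1 and #2 turbines

print(equivalent_availability(boiler))   # 0.59375 -> 59.375%
print(equivalent_availability(turbine))  # 0.5625  -> 56.25%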
With the system equivalent availabilities for the boiler, #1 turbine and #2 turbine now calculated from all the possible partial states (up, 50% down, or 100% down), what would be the expected or average plant output rate when the equivalent availability for the boiler is 59.375% and the equivalent availability of the turbine generators is 56.25% each?
Taking into consideration states with reduced utilisation as a result of partial outages, the expected or average plant output rate is calculated as:

Average plant output rate with partial outages
= Σ (capacity of plant state at full and partial utilisation of systems that are operational × availability of each integrated system)
= [1.0 × (0.59375 × 0.5625 × 0.5625)] + [0.75 × (0.59375 × 0.5625 × 0.5625)]
+ [0.5 × (0.59375 × 0.5625 × 0.4375)] + [0.75 × (0.59375 × 0.5625 × 0.5625)]
+ [0.5 × (0.59375 × 0.4375 × 0.5625)] + [0.5 × (0.59375 × 0.5625 × 0.5625)]
+ [0.5 × (0.59375 × 0.5625 × 0.5625)] + [0.5 × (0.59375 × 0.5625 × 0.4375)]
+ [0.5 × (0.59375 × 0.5625 × 0.5625)] + [0.5 × (0.59375 × 0.4375 × 0.5625)]
+ [0.5 × (0.59375 × 0.5625 × 0.5625)] + [0.25 × (0.59375 × 0.5625 × 0.4375)]
+ [0.25 × (0.59375 × 0.4375 × 0.5625)]
= 0.18787 + (2 × 0.14090) + (4 × 0.09393) + (4 × 0.07306) + (2 × 0.04697)
= 0.18787 + 0.28180 + 0.37572 + 0.29224 + 0.09394
= 85.5%
The expected or average plant output rate, taking into consideration states with reduced utilisation as a result of partial outages, is thus 85.5%.
4.3 Analytic Development of Availability and Maintainability in Engineering Design
Several techniques are identified for availability and maintainability prediction, assessment and evaluation in the conceptual, preliminary and detail design phases respectively. As with the analytic development of reliability and performance in Sect. 3.3, only certain of the availability and maintainability techniques have been considered for further development. This approach is adopted on the basis of the transformational capabilities of the techniques in developing intelligent computer-automated methodology using optimisation algorithms (OA). These optimisation algorithms should ideally be suitable for application in artificial intelligence-based (AIB) modelling, in which development of knowledge-based expert systems within a blackboard model can be applied in determining the integrity of engineering design. Furthermore, the AIB model must be suited to applied concurrent engineering design in an integrated collaborative design environment, in which automated continual design reviews may be conducted during the engineering design process by remotely located design groups communicating via the internet.
4.3.1 Analytic Development of Availability and Maintainability Prediction in Conceptual Design
A technique selected for further development as a tool for availability and maintainability prediction in determining the integrity of engineering design during the conceptual design phase is modelling based on measures of system performance. This technique has already been considered in part for reliability prediction in Sect. 3.3.1, and needs to be expanded to include prediction of reliability as well as inherent availability and maintainability. System performance analysis through the technique of simulation modelling is also considered, specifically for prediction of system characteristics that affect system availability. Furthermore, the technique of robust design is selected for its preferred application in decision-making about engineering design integrity, particularly in considering the various uncertainties involved in system performance simulation modelling.
Monte Carlo simulation is used to propagate these uncertainties in the application of simulation models in a collaborative engineering design environment. The techniques selected for availability and maintainability prediction in the conceptual design phase are thus considered under the following topics:
i. System performance measures and limits of capability
ii. System performance analysis and simulation modelling
iii. Uncertainty in system performance simulation modelling.
4.3.1.1 System Performance Measures and Limits of Capability
Referring back to Sect. 3.3.1, it was pointed out that, instead of using actual performance values such as temperatures, pressures, etc., it is more meaningful to use the proximity of the actual performance value to the limit of capability of the item of equipment. In engineering design review, the proximity of performance to a limit closely relates to a measure of the item's safety margin, which could indicate the need for design changes or for selecting alternative systems. Non-dimensional numerical values for system performance may be obtained by determining the limits of capability, C_max and C_min, with respect to the performance parameters for system integrity (i.e. reliability, availability, maintainability and safety). The nominal range of integrity values for which the system is designed (i.e. 95 to 98% reliability at 80 to 85% availability) must also be specified. Thus, a set of data points is obtained for each item of equipment with respect to the relevant performance parameters, to be entered into a parameter performance matrix.
a) Performance Parameters for System Integrity
For predicting system availability, the performance measures for reliability and maintainability are estimated, and the inherent availability is determined from these measures. As indicated previously, system reliability can be predicted by estimating the mean time between failures (MTBF), and maintainability performance can be predicted by estimating the mean time to repair (MTTR).
Inherent availability, A_i, can then be predicted according to:

A_i = MTBF / (MTBF + MTTR)    (4.148)
In the case of reliability and maintainability, there are no operating limits of capability but, instead, a prediction of engineering design performance relating to MTBF and MTTR. Data points for the parameter performance matrix can be obtained through expert judgement of system reliability, by estimating the mean time between failures (MTBF), and of maintainability performance, by estimating the mean time to repair (MTTR) of critical failures (Booker et al. 2000). (Refer to Sect. 3.3.3.4 dealing with expert judgement as data.)
A reliability data point x_ij can be generated from the predicted MTBF (R), a maximum acceptable MTBF (R_max), and a minimum acceptable MTBF (R_min), where:

x_ij = (R − R_min) × 10 / (R_max − R_min)    (4.149)
Similarly, a maintainability data point x_ij can be generated from the predicted MTTR (M), a minimum acceptable MTTR (M_min), and a maximum acceptable MTTR (M_max), where:

x_ij = (M_max − M) × 10 / (M_max − M_min)    (4.150)
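A brief Python sketch of Eqs. (4.148) to (4.150) may help to clarify how the MTBF and MTTR estimates are turned into matrix data points. The sketch is not part of the source; the sample MTBF/MTTR values and acceptance limits below are hypothetical, chosen only for illustration.

# Sketch of Eqs. (4.148)-(4.150). All numeric values below are hypothetical.

def inherent_availability(mtbf, mttr):
    """Eq. (4.148): A_i = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

def reliability_data_point(r, r_min, r_max):
    """Eq. (4.149): scales the predicted MTBF into a 0-10 data point x_ij."""
    return (r - r_min) * 10 / (r_max - r_min)

def maintainability_data_point(m, m_min, m_max):
    """Eq. (4.150): scales the predicted MTTR into a 0-10 data point x_ij
    (a low MTTR gives a high score)."""
    return (m_max - m) * 10 / (m_max - m_min)

# Hypothetical expert-judgement estimates for one equipment item (hours):
mtbf, mttr = 900.0, 12.0
print(inherent_availability(mtbf, mttr))            # ~0.987
print(reliability_data_point(mtbf, 500.0, 1000.0))  # 8.0
print(maintainability_data_point(mttr, 4.0, 24.0))  # 6.0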
b) Analysis of the Parameter Profile Matrix
The performance measures of a system can be described in matrix form in a parameter profile matrix (Thompson et al. 1998). The matrix is compiled containing data points relating to all salient parameters that describe a system's performance. The rows and columns of the matrix can be analysed in order to predict the characteristics of the designed system. Figure 3.22 is reproduced below as Fig. 4.13, with a change to the column heading from process systems to equipment items, as a single system is being considered.
Consider one row of the matrix. Each data point x_ij refers to a single performance parameter; looking along a row reveals whether the system design is consistently good with respect to this parameter for all the system's equipment items, or whether there is variable performance. For a system with a large number of equipment items (in other words, a high level of complexity), a good system design should have a high mean and a low standard deviation of the x_ij scores for each parameter. These scores are calculated as indicated in Figs. 3.23, 3.24 and 3.25 of Chap. 3, Sect. 3.3.1.

Fig. 4.13 Parameter profile matrix (columns: equipment items)

Furthermore, a parameter performance index (PPI) that constitutes an analysis of the rows of the parameter profile matrix can be calculated (Thompson et al. 1998):
PPI = n(Σ_{j=1}^{n} 1/c_ij)^{-1}    (4.151)
where n is the number of equipment items (the number of data points in the row).
The inverse method of calculating an overall score is advantageous when the range of scores is 0 to 10, as it highlights low scores, whereas a straightforward addition of scores may not reveal a low score if there are high scores in the group. However, the inverse calculation method is less sensitive to error than a multiplication method. The numerical value of PPI lies in the range 0 to 10, no matter how many data points are included in the calculation. Thus, a comparison can be made to judge whether there is acceptable overall performance with respect to all the parameters, or whether the system design is weak in any respect, particularly concerning the parameters of reliability, inherent availability, and maintainability.
A similar calculation to the parameter performance index can be made for each column of the parameter profile matrix, whereby an equipment or device performance index (DPI) is calculated as (Thompson et al. 1998):
DPI = m(Σ_{i=1}^{m} 1/c_ij)^{-1}    (4.152)
where m is the number of performance parameters relating to the equipment item of column j.
A comparison of DPIs reveals those equipment items that contribute less to the overall performance of the system. For an individual equipment item, a good design has a high mean value of the x_ij scores with a low standard deviation. This system performance prediction method is intended for complex systems comprising many sub-systems. Overall system performance can be determined quite efficiently, as a wide range of system performance parameters can be included in the PPI and DPI indices. However, in the case of reliability, inherent availability, and maintainability, only the two parameters MTTR and MTBF are included for prediction, as indicated in Eq. (4.148). From an engineering design integrity point of view, the method collates relevant design integrity data (i.e. reliability, inherent availability, and maintainability) obtained from expert judgement predictions, and compares these with design performance criteria, requirements and expectations.
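Since Eqs. (4.151) and (4.152) are both inverse (harmonic-mean-type) calculations over the rows and columns of the matrix, they can be sketched as follows. This is an illustrative reading of the two indices rather than the authors' implementation, and the sample matrix values are hypothetical.

# Sketch of Eqs. (4.151) and (4.152): harmonic-mean-type indices over the
# rows (PPI) and columns (DPI) of the parameter profile matrix.
# The matrix values below are hypothetical 0-10 data points x_ij.

def inverse_index(scores):
    """k * (sum of 1/score)^-1 for k scores (all assumed non-zero): stays in
    the 0-10 range and is pulled down strongly by any single low score."""
    return len(scores) / sum(1.0 / s for s in scores)

# Rows: performance parameters; columns: equipment items (Fig. 4.13).
matrix = [
    [8.0, 7.5, 9.0],   # e.g. reliability data points
    [6.0, 8.0, 7.0],   # e.g. maintainability data points
    [9.0, 5.0, 8.5],   # e.g. a process performance parameter
]

ppi = [inverse_index(row) for row in matrix]        # Eq. (4.151), per row
dpi = [inverse_index(col) for col in zip(*matrix)]  # Eq. (4.152), per column
print(ppi)  # one index per performance parameter
print(dpi)  # one index per equipment item; low values flag weak items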
c) The Design Checklist
There are many qualitative factors that influence reliability, inherent availability, and maintainability. Some are closely related to operability, and there are no clear demarcation lines. In order to expand design integrity prediction, a study of the many factors that influence these parameters must be carried out. An initial list is first derived in the form of a design checklist. The results of previous research into reliability, availability and maintainability problems are carefully considered when devising the checklist questions (McKinney et al. 1989).
The checklist is intended for general application, and includes factors affecting design operability. In many cases, there will be questions that do not apply to the design being reviewed; these are then omitted.
The questions can be presented to the analyst in the form of a specific knowledge-based expert system within an artificial intelligence-based (AIB) blackboard model for design review during the design process. Results are presented in a format that enables design review teams to collaboratively make reasonable design judgements. This is important, in view of the fact that one design team may not carry out the complete design of a particular system, as a result of multidisciplinary engineering design requirements, and the design review teams may not all be grouped at one location, prompting the need for collaborative engineering design. This scenario is considered later in greater detail, in accounting for the various uncertainties involved in system performance simulation modelling for engineering design utilising the robust design technique. Furthermore, knowledge-based expert systems within AIB blackboard models are given in Sects. 3.4, 4.4 and 5.4.
A typical example of a checklist question set, extending from conceptual to schematic design, is the following:

Question set: Is pressure release and drainage (including purging and venting) provided?

Are purge points considered? If there are no purge points, drainage may occur via some other means, which could increase the exposure of maintenance personnel and require protection.
i. Purge points not present, requiring some other means: 0
ii. Purge points present but accessibility will be poor: 1
iii. Purge points present and accessibility will be good: 2
A series of questions is posed for each design, and each answer is given a score of 0, 1 or 2. The total score is obtained by the summation of the scores, calculated as a percentage of the total of all the relevant questions. Questions that do not apply are therefore removed from the analysis. The objective is to obtain overall design integrity ratings for the process and for each device, in order to highlight weak design integrity considerations. A high percentage score indicates good performance, where the scores complement the MTTR and MTBF calculations. Where there is a mismatch, for example a high estimated MTBF but a low reliability score, or a low estimated MTTR but a low maintainability score, then further design investigation is required.
d) Integrity Prediction of Common Items of Equipment
The prediction method is intended for those process systems that comprise many well-known items (as the design is still in its conceptual phase). It can be expected that certain items of equipment may exhibit common maintainability requirements. In order to save time in data estimates, typical maintenance requirements for common devices are prepared as data sheets.

Thus, if a centrifugal pump is selected, then a common data sheet would be available for this item. The data sheet can be reviewed and accepted as it is, or it can be edited to reflect certain particular circumstances, or the checklist can be completed from a blank form to compile a new set of data for that item. In addition to the responses to questions for each individual item, a response to each particular question regarding total systems integration may be considered for all relevant items. For example, in the case of maintainability, the question might refer to the ability to detect a critical failure condition. If the response to this question is estimated at 60%, then this would suggest that 40% of all items for which the question is relevant would remain undetected. It is thus possible to review the responses to questions across all the integrated systems, to gain an understanding of the integrity of the conceptual design as a whole.
e) Design Reviews of Performance Parameters for System Integrity
Design review practices can take many forms. At the lowest level, they consist of an examination of drawings before manufacture begins. More comprehensive design reviews include a review at the different phases of the engineering design process: the specification (design requirements), conceptual design, schematic or preliminary design, and detail design. There are particular techniques that the design review team implements at these different phases, according to their suitability. The method proposed here is intended for use when very little detail of the equipment is known. In particular, it is intended for use during the conceptual design phase, in preparation for the follow-up schematic design phase when systems are synthesised using manufactured equipment. Many engineered installations are designed in this way. Design reviews can be quantitative or qualitative. The advantage of quantitative reviews is that they clearly present a justification for a decision that a particular design is either satisfactory or unsatisfactory with respect to essential performance specifications.

Therefore, where possible, there are advantages to a quantitative review as early as possible in the engineering design process. A design review should add value
to each design. Design calculation checks are taken care of by all good, traditional design practices; however, a good design review will be repaid by reduced commissioning and start-up problems, and better ramp-up operations. The design review method proposed here seeks to provide a quantitative evaluation that adds value to engineering design by integrating process performance parameters such as mass flows, pressures and temperatures with reliability, availability and maintainability. The performance data required are the same as those used to specify the equipment. Therefore, there is no requirement for extra data in excess of those that would be available for process systems comprising many well-known items. The only additional requirement is a value judgement of acceptable and unacceptable MTBF and MTTR. These data are then compiled into a parameter profile matrix, using data points derived from the proximity of a required operating point to the performance limit of the equipment. As indicated previously, the use of the proximity of the nominal design performance to a limit of equipment capability is similar to the concept of a safety margin. Similarly, the estimated MTBF and MTTR data points reflect the closeness of predicted performance to expectations. Having compiled the matrix, analysis can be performed on particular variables for all items of equipment, or on all variables for a particular item of equipment, yielding PPI and DPI indices respectively.

On a large engineering design project, data can be generated by several design teams, and compiled and analysed using the AIB blackboard model for automated design reviews throughout the engineering design process. MTBF and MTTR expectations can be varied in a sensitivity analysis. The computer-automated methodology can highlight matrix cells with low scores, and pick out performance variables and equipment that show poor performance. Therefore, the data handling and calculation aspects of the design verification do not impose excessive requirements. The flexibility of the approach and the method of data point derivation are especially useful in process engineering enhancement projects. Inevitably, there are instances when engineered installations are subject to modifications, either during construction or even after ramp-up (e.g. there may be advantages in processing at different pressures and/or temperatures), necessitating review of the equipment performance data after design completion. The implications of changes to temperature and pressure requirements can be readily explored, since the parameter profile analysis method will immediately identify when the performance of an item is in close proximity to a limit.
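A computer-automated review of the matrix, of the kind described above, essentially amounts to scanning for cells below a threshold. The following Python sketch shows one possible form of such a check; it is not the AIB blackboard implementation, and the parameter names, equipment items and threshold are hypothetical.

# Illustrative sketch (not the AIB blackboard implementation): flag parameter
# profile matrix cells whose data points fall below a review threshold.

def flag_low_scores(matrix, parameters, equipment, threshold=3.0):
    """Return (parameter, equipment item, score) triples needing design review."""
    return [
        (parameters[i], equipment[j], score)
        for i, row in enumerate(matrix)
        for j, score in enumerate(row)
        if score < threshold
    ]

parameters = ["MTBF", "MTTR", "pressure"]          # hypothetical rows
equipment = ["pump", "heat exchanger", "valve"]    # hypothetical columns
matrix = [[8.0, 2.5, 9.0], [6.0, 8.0, 1.5], [9.0, 5.0, 8.5]]

for param, item, score in flag_low_scores(matrix, parameters, equipment):
    print(f"Review {item}: {param} data point {score} is below the threshold")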
Furthermore, engineered installations may be required to process materials that are different from those that they were originally designed to process. As the equipment data are already available in the AIB blackboard model, all that is needed is to input new process data for further analysis. The time taken to set up the equipment database during a post-design review will be justified, since the data can be used throughout the life of the design. Reliability, inherent availability, and maintainability are included in the parameter profile matrix evaluation by estimating MTBF and MTTR times, and then calculating the inherent availability. In order to expand on these important variables, and to explore the various factors that influence reliability, inherent availability, and maintainability, checklists are developed that may be used in different ways, either on their own or to complement the parameter profile analysis. In a design review, many of the questions may be answered to obtain a view of a system's integrity, or to obtain an overall view of the integrity of the integrated systems design. Another use for the checklists would be in a process enhancement study, whereby an existing engineered installation is audited to identify precisely the improvements required.
f) Reliability and Maintainability Checklists
The checklists are designed for use by engineering designers in the case of a design review exercise, or by process engineers in the case of required post-design process enhancements. Although the question sets listed in the AIB blackboard models presented in Sects. 3.4, 4.4 and 5.4 are somewhat different from the example checklists for reliability and maintainability, the relevant principles and intended use remain the same. A segment of an AIB blackboard model expert system tab page is given below, showing a particular question set.

The following question sets indicate the general content of the checklists (Thompson et al. 1998):
Reliability checklist
Q1 Is the component a single unit, active redundant or stand-by redundant?
Q2 Are the demands on the equipment short, medium or long pulses?
Q3 Does the duty cycle involve any thermal, mechanical or pressure shocks?
Q4 Is the pH of the process fluid high or low?
Q5 Is the component used for high, medium or low flow rates?
Q6 Is the component in a physical situation where external corrosive attack can occur from: open weather, other machinery, being submerged?
Q7 Are solids present in the process fluid?
Q8 Are gases present?
Q9 Is the fluid of a viscosity that is likely to increase loads on the equipment?