Fig 5.126 Fuzzy logic for managing uncertain data
a theoretical overview of reliability, availability, maintainability and safety in engineering design—the methodology presented in this handbook.
Plant analysis in the AIB blackboard is the working memory of the knowledge-based expert systems, consisting of a global database of facts relating to the integrity of engineering design, which are used for establishing automated continual design reviews. The basic aims of automated continual design reviews are to automatically assess system requirements and allocations to ensure that the design specifications are complete; to automatically compare the design output against design specifications; to automatically present the risks associated with a collaborative and continuous design effort; and to continually allow for decision-making in selecting the most suitable design amongst the current design solutions.
Figures 5.128 and 5.129 illustrate the typical AIB blackboard format of an automated continual design review. Figure 5.128 shows the blackboard systems hierarchy navigation and selection format whereby critical components can be viewed with regard to their systems relationships. Figure 5.129 shows a typical criticality assessment of a component, based on condition and performance obtained from an FMECA analysis.
Fig 5.127 AIB blackboard model with plant analysis overview option

The artificial intelligence blackboard model—overview
Artificial intelligence-based strategies for decision-making and, in particular, for decisions concerning the integrity of engineering design are centred around three approaches termed deterministic knowledge, probabilistic knowledge and possibilistic knowledge.
Deterministic knowledge, in engineering design integrity formulation, is based on a well-defined systems structure and definition of the operational and physical functions of equipment, the usefulness of which depends on the ability to relate the information specifically to failure conditions (or failure modes) in identifying problems of equipment failure consequences.
Probabilistic knowledge is gained mainly from a statistical analysis of the probable occurrences of events, such as component failures, in order to predict the expected occurrence of these events in the future to be able to design-out problems or to implement some form of preventive action.
Possibilistic knowledge focuses primarily on imprecision or uncertainty that is intrinsic to equipment degradation. Imprecision here is meant to express a sense of vagueness, rather than the lack of any knowledge at all about predicted equipment condition, particularly its physical condition. In other words, possibilistic knowledge concerns the concept of 'fuzziness', and not 'randomness'.
The application of fuzzy logic expert systems focuses on the use of expert systems technology and fuzzy logic to achieve intelligent computer automated methodology to determine the integrity of engineering design. The most important impact areas of expert systems on the integrity of engineering design are:
Fig 5.128 Automated continual design review: component SBS
• automatic checking of design constraints that affect the design's integrity, allowing for alternatives to be considered in a collaborative design environment;
• automation of complex tasks and activities for determining design integrity where expertise is specialised and technical;
• strategies for searching in the space of alternative designs, and monitoring of progress towards the targets of achieving the required design integrity;
• integration of diverse knowledge sources in an AIB blackboard system, with expertise applied concurrently to the problem of ensuring design integrity;
• provision of intelligent computer automated methodology for determining the integrity of engineering design through automated continual design reviews.
5.4.2 Evaluation of Modelling Results
As previously indicated, blackboard systems consist mainly of a set of knowledge sources and a blackboard data structure. A blackboard knowledge source is a highly specialised, highly independent process that takes inputs from the blackboard data structure, performs a computation, and places the results of the computation back in the blackboard data structure.

Fig 5.129 Automated continual design review: component criticality

This blackboard data structure is a centralised global data structure partitioned in a hierarchical manner and used to represent the problem domain (in this case, the engineering design problem), and acts as a shared memory visible to all of the knowledge sources to allow intercommunication between the knowledge sources. The blackboard data structure contains shared blackboard data objects and can be accessed by all of the knowledge sources. This design allows for an opportunistic control strategy that enables a knowledge source to contribute towards the solution of the current problem without knowing which of the other knowledge sources will use the information.
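As a rough illustration of this arrangement, the sketch below models a shared blackboard, a set of independent knowledge sources, and an opportunistic control cycle in Python; the class and method names (Blackboard, KnowledgeSource, control_cycle) are illustrative assumptions, not the AIB implementation itself.

```python
# A minimal, illustrative blackboard sketch (names are assumptions,
# not an established blackboard shell or the AIB implementation).

class Blackboard:
    """Centralised global data structure shared by all knowledge sources."""
    def __init__(self):
        self.data = {}  # shared blackboard data objects, keyed by name

    def post(self, name, value):
        # Knowledge sources place their results back on the blackboard here.
        self.data[name] = value


class KnowledgeSource:
    """Highly specialised, independent process that reads inputs from the
    blackboard, performs a computation and posts the results back."""
    def __init__(self, name, inputs, outputs, compute):
        self.name = name
        self.inputs = inputs      # blackboard data objects this source reads
        self.outputs = outputs    # blackboard data objects this source writes
        self.compute = compute    # function: dict of inputs -> dict of outputs

    def ready(self, blackboard):
        # Opportunistic control: a source may contribute as soon as all of its
        # inputs are present, without knowing which other sources will use them.
        return all(name in blackboard.data for name in self.inputs)

    def contribute(self, blackboard):
        results = self.compute({n: blackboard.data[n] for n in self.inputs})
        for name, value in results.items():
            blackboard.post(name, value)


def control_cycle(blackboard, knowledge_sources):
    """One opportunistic control cycle over all knowledge sources."""
    for ks in knowledge_sources:
        if ks.ready(blackboard):
            ks.contribute(blackboard)
```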
Blackboard systems are a natural progression of expert systems into a more powerful problem-solving technique. They generally provide a way for several highly specialised knowledge sources to cooperate to solve larger and more complex problems. Due to the hierarchical structure of the blackboard, each data object on the blackboard will usually have only one knowledge source that can update it. Although these knowledge sources are often referred to as 'experts', knowledge sources are not restricted to expert systems such as the ExSys Expert System (ExSys 2000) or other AI systems, and include the ability to add conventionally coded software, such as the artificial intelligence-based (AIB) model, to cooperate in solving problems.
Many knowledge sources are numeric or algorithmic in nature (i.e. the AIB blackboard knowledge source for artificial neural network (ANN) computation that is specifically applied for processing time-varying information, such as non-linear dynamic modelling, time series prediction, adaptive control, etc., of various engineering design problems). The use of multiple, independent knowledge sources allows each knowledge source to use the data representation scheme and problem-solving strategy that best suit the specific purpose of that knowledge source. These specialised knowledge sources are thus easier to develop and can be hosted on distributed hardware.
The use of opportunistic problem-solving and highly specialised knowledge sources allows a set of distributed knowledge sources to cooperate concurrently to solve large, complex design problems. However, blackboard systems are not easily developed, especially where a high degree of concurrent knowledge source execution must be achieved while maintaining knowledge consistency on the blackboard. In general, blackboard systems have not attained their apparent potential, because there are no established tools or methods to analyse their performance.

The lack of a coherent set of performance analysis tools has in many cases resulted in the revision of a poorly designed system being ignored once the system had been implemented. This lack of appropriate performance analysis tools for evaluating blackboard system design is one of the reasons why incorporating concurrency into the blackboard problem-solving model has not generally been successful. Consequently, a method for the validation of blackboard system design has been developed (McManus 1991). This method has been applied to the AIB blackboard system for determining the integrity of process engineering design.
Knowledge source connectivity analysis is a method for evaluating blackboard system performance using a formalised model for blackboard systems design. A description of the blackboard data structure, the function computed by each knowledge source, and the knowledge source's input and output variables are sufficient to create a formalised model of a blackboard system (McManus 1992). Connectivity analysis determines the data transfers between the knowledge sources and data migration across the blackboard.

The attributes of specialisation, serialisation and interdependence are evaluated for each knowledge source. This technique allows for the evaluation of a blackboard design specification before the blackboard system is developed. This also allows the designer to address knowledge source connectivity problems, feedback loops and interdependence problems as a part of the initial design process. Knowledge source connectivity analysis measures the output set overlap, functional connectivity, and output to input connectivity between pairs of knowledge sources. Output set overlap is a measure of the specialisation of pairs of knowledge sources, whereas functional connectivity between pairs of knowledge sources is a measure of their serialisation, and output to input connectivity is a measure of their interdependence.
a) The Formalised Model for Blackboard Systems Design
Knowledge source connectivity analysis requires a specification of the system developed using a formalised model for blackboard systems (McManus 1992). Blackboard systems can be modelled as a blackboard data structure containing shared blackboard data objects, and a set of cooperating knowledge sources that can access all of the blackboard data objects. These knowledge sources are processes that take inputs from the blackboard, perform some computation, then place the results back on the blackboard for other design teams in a collaborative design environment.
Blackboard data structure
A blackboard data structure is a global data structure consisting of a set of blackboard data objects, {d1, ..., dj}, used to represent the problem domain.
Blackboard data object
Each blackboard data object is a predefined data object type with a point value or a range of values. A blackboard data object, dj, is thus an object that has a single value or multiple values.
Knowledge source
A knowledge source, ksj, of a set of knowledge sources, β = {ks1, ..., ksj}, consists of the following:

• a set of input variables, IV = {iv1, ..., ivn},
• a set of input conditions, IC = {ic1, ..., icn},
• a set of output variables, OV = {ov1, ..., ovm},
• a description of the computation delivered by the knowledge source,
• a set of preconditions, PR = {pr1, ..., prk},
• a set of post-conditions, PT = {pt1, ..., ptk}, and
• an input queue, IQ.
A knowledge source's input conditions are a set of Boolean variables used to notify a knowledge source when one of its input variables has been updated. The preconditions are a set of Boolean functions that must all be TRUE for a knowledge source to be activated, and the post-conditions are a set of Boolean functions that must all be TRUE for a knowledge source to post the result of its computation to the blackboard. If all of a knowledge source's activation conditions are met while it is executing, the input queue stores the knowledge source's input variables.
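Transcribed almost literally into code, this definition might be represented as follows; this is only a sketch, with field names taken from the definition above and the activation and posting checks reduced to evaluating the precondition and post-condition sets.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Illustrative record matching the formal knowledge source definition:
# input variables IV, input conditions IC, output variables OV, a computation,
# preconditions PR, post-conditions PT and an input queue IQ.
@dataclass
class KnowledgeSourceSpec:
    IV: List[str]                                 # input variables
    IC: Dict[str, bool]                           # input conditions (update flags)
    OV: List[str]                                 # output variables
    computation: Callable[[Dict], Dict]           # the computation delivered
    PR: List[Callable[[Dict], bool]]              # preconditions (all must be TRUE)
    PT: List[Callable[[Dict], bool]]              # post-conditions (all must be TRUE)
    IQ: List[Dict] = field(default_factory=list)  # input queue

    def ready(self, blackboard: Dict) -> bool:
        # Activation: every precondition must evaluate to TRUE.
        return all(pr(blackboard) for pr in self.PR)

    def may_post(self, results: Dict) -> bool:
        # Posting: every post-condition must evaluate to TRUE for the results.
        return all(pt(results) for pt in self.PT)
```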
There are two classes of input variables pertaining to knowledge sources: explicit input variables and generic input variables. An explicit input variable specifies a single, unique blackboard data object that is used as the input variable to a knowledge source. A knowledge source can use only the blackboard data object specified by the explicit input variable as a valid input. A generic input variable specifies a class or type of blackboard data object that can be used as the input variable to the knowledge source. The knowledge source can accept an instance of a blackboard data object of the specified class as an input variable. The use of generic input variables allows development of knowledge sources that function on a class of blackboard data objects.
Knowledge sources can be classified by their input variables:

• Explicit knowledge sources have only explicit input variables;
• Mixed knowledge sources have both explicit and generic input variables;
• Generic knowledge sources have only generic input variables.
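The difference between explicit and generic input variables amounts to matching a blackboard data object by name versus matching it by class, as the following illustrative fragment suggests (the object and class names are hypothetical):

```python
# Purely illustrative: "pump_101_vibration" and VibrationReading are made-up names.

def matches_explicit(input_variable: str, data_object_name: str) -> bool:
    # An explicit input variable names one unique blackboard data object.
    return input_variable == data_object_name

def matches_generic(input_class: type, data_object) -> bool:
    # A generic input variable accepts any instance of the specified class/type.
    return isinstance(data_object, input_class)

class VibrationReading:
    """A hypothetical class of blackboard data object."""
    def __init__(self, value):
        self.value = value

# An explicit input accepts only the named data object ...
assert matches_explicit("pump_101_vibration", "pump_101_vibration")
# ... whereas a generic input accepts any instance of the specified class.
assert matches_generic(VibrationReading, VibrationReading(4.2))
```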
Blackboard system
A blackboard system is used to allow intercommunication of knowledge sources, and acts as a shared memory that is visible to all of the knowledge sources. A blackboard system, B, is a tuple ⟨X, P, β, Id, θ⟩, where:

• X is a set of blackboard data objects, X = {d1, ..., di};
• P is the set of blackboard data object states, P = V1 × V2 × ... × Vi, where Vi is the set of all valid values for blackboard data object di;
• β is the set of knowledge sources, β = {ks1, ..., ksj};
• each knowledge source's domain is a subset of P, and its range is a subset of P;
• Id is an i-vector describing the i initial values of the blackboard data objects, Id ∈ P;
• θ is a relation on β, where θ ⊂ β × β, and (ksj, ksk) ∈ θ if and only if ∃ dj ∈ X where dj ∈ OV(ksj) ∧ dj ∈ IV(ksk);
• if (ksj, ksk) ∈ θ, then ksk is a successor of ksj, and ksj is a predecessor of ksk.
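The relation θ, and with it the successor and predecessor structure, can be derived directly from the knowledge sources' input and output variable sets; the short sketch below assumes each knowledge source is described simply by its IV and OV sets.

```python
# Sketch: derive the relation theta from output/input variable sets.
# (ks_j, ks_k) is in theta when some data object output by ks_j is an input of ks_k.

def build_theta(knowledge_sources):
    """knowledge_sources maps a name to a dict with 'IV' and 'OV' sets."""
    theta = set()
    for j, ks_j in knowledge_sources.items():
        for k, ks_k in knowledge_sources.items():
            if j != k and ks_j["OV"] & ks_k["IV"]:
                theta.add((j, k))  # ks_k is a successor of ks_j
    return theta

beta = {
    "ks1": {"IV": {"d1"}, "OV": {"d2", "d3"}},
    "ks2": {"IV": {"d2"}, "OV": {"d4"}},
    "ks3": {"IV": {"d5"}, "OV": {"d6"}},
}
print(build_theta(beta))  # {('ks1', 'ks2')}: ks2 is a successor of ks1
```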
b) Performance Analysis of the Blackboard Systems Design

The performance of a blackboard system design can be analysed in the following manner (McManus 1991): for each knowledge source ksj in β there is an input set, Ψj, containing all of the input variables of ksj, and an output set, Φj, containing all of the output variables of ksj:

Ψj = {iv1, iv2, ..., ivn}
Φj = {ov1, ov2, ..., ovm}

Once Ψj and Φj have been established for all ksj in β, the sets Γj,k and θj,k can be computed for all knowledge source pairs {ksj, ksk} in β (j ≠ k):

Γj,k = Φj ∩ Φk
θj,k = Φj ∩ Ψk
As indicated, output set overlap is a measure of the specialisation of pairs of knowledge sources, whereas functional connectivity between the pairs of knowledge sources is a measure of their serialisation, and output to input connectivity is a measure of their interdependence.
Specialisation value
The output set overlap is a measure of the specialisation of pairs of knowledge sources, whereby the set Γj,k is computed to assess functional specialisation. The cardinality of the set Γj,k for each pair {ksj, ksk} in β is a measure of the output overlap for the pair {ksj, ksk} (i.e. a measure of the specialisation of pairs of knowledge sources). Knowledge source pairs {ksj, ksk} with a large output overlap imply that ksj and ksk share a large number of output variables and, thus, have similar functions. Knowledge source pairs {ksj, ksk} with a low overlap imply that ksj and ksk have different functions. A proposed heuristic to measure knowledge source specialisation is to compute a specialisation value, Ωj,k, for each pair {ksj, ksk} in β. Specialisation values measure the output set overlap of a pair of knowledge sources, {ksj, ksk}. The specialisation value is computed using the following (McManus 1992):

Ωj,k = card(Γj,k) / min(card(Φj), card(Φk))     (5.120)
The cardinality of the set Γj,k divided by the minimum of the cardinalities of the sets Φj and Φk computes a percentage of overlap between the set Γj,k and the smaller of the sets Φj and Φk. As Ωj,k approaches 1.0, the output overlap between ksj and ksk increases. As Ωj,k approaches 0.0, the output overlap between ksj and ksk decreases. For the limiting cases, where Φj ⊃ Φk or Φk ⊃ Φj, we know that Ωj,k = 1.0, and ksj and ksk compute the same outputs—thus, the knowledge sources are not specialised. However, if Γj,k = φ (where φ is the null value), then Ωj,k = 0.0, and the two knowledge sources have no common outputs and are highly specialised in relation to each other.
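Under the assumption that the output sets Φj and Φk are available as plain sets, Eq. 5.120 reduces to a one-line computation, as in the following sketch:

```python
# Specialisation value (Eq. 5.120): output set overlap between two knowledge sources.
def specialisation_value(phi_j: set, phi_k: set) -> float:
    gamma_jk = phi_j & phi_k  # output set overlap
    return len(gamma_jk) / min(len(phi_j), len(phi_k))

# Identical output sets -> 1.0 (not specialised); disjoint outputs -> 0.0 (highly specialised).
print(specialisation_value({"d1", "d2"}, {"d1", "d2"}))  # 1.0
print(specialisation_value({"d1", "d2"}, {"d3"}))        # 0.0
```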
Serialisation value
The functional connectivity between pairs of knowledge sources is a measure of their serialisation, whereby the set θj,k is computed to assess serialisation. The cardinality of the set θj,k for each pair {ksj, ksk} in β, compared to the cardinality of the set Ψk, is a measure of the input overlap for the pair {ksj, ksk} (i.e. a measure of the serialisation of pairs of knowledge sources). Knowledge source pairs {ksj, ksk} with a large input overlap imply that ksj and ksk share a large number of output to input variables and, thus, form serialised execution. Knowledge source pairs {ksj, ksk} with a low input overlap imply that ksj and ksk can execute separately. A serialisation value measures the functional connectivity between a pair of knowledge sources, where the functional connectivity is the relative output to input ratio. A proposed heuristic, therefore, to measure knowledge source serialisation is to compute a serialisation value, Σj,k, for each pair {ksj, ksk} in β. Serialisation values measure the functional connectivity of a pair of knowledge sources, {ksj, ksk}.

The serialisation value is computed using (McManus 1992):

Σj,k = card(θj,k) / card(Ψk)     (5.121)
This heuristic computes the percentage of the input data objects for knowledge source ksk that are provided by knowledge source ksj. The cardinality of the set θj,k divided by the cardinality of the set Ψk computes a percentage of input overlap between θj,k and Ψk.
As Σj,k approaches 1.0, the percentage of overlap between θj,k and Ψk is greater, and the serialisation between ksj and ksk strengthens. As Σj,k approaches 0.0, the serialisation between ksj and ksk weakens. For the limiting cases, if Φj ⊃ Ψk, then Σj,k = 1.0, and ksj and ksk have direct serialisation. If θj,k = φ (where φ is the null value), then Σj,k = 0.0, and the two knowledge sources are independent and can execute concurrently.
Strongly connected knowledge sources have high serialisation values. These knowledge sources form serialised execution pipelines, with each knowledge source blocking completion of any computation for the same input data objects by other knowledge sources. Unless multiple copies of the serialised knowledge sources are developed, the serial pipelines reduce the blackboard's capability for concurrent execution. Weakly connected knowledge sources reduce knowledge source serialisation and increase the opportunity for concurrent knowledge source execution. Knowledge source pairs that have high serialisation values are best suited for knowledge source integration, whereby the first knowledge source provides all of the inputs to the second knowledge source. Such a serially connected pair of knowledge sources can be reduced to a single knowledge source that combines the functionality of the two.
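A corresponding sketch for the serialisation value of Eq. 5.121, again assuming set-valued input and output variable sets:

```python
# Serialisation value (Eq. 5.121): fraction of ks_k's inputs provided by ks_j.
def serialisation_value(phi_j: set, psi_k: set) -> float:
    theta_jk = phi_j & psi_k  # output-to-input connections
    return len(theta_jk) / len(psi_k)

# ks_j supplies every input of ks_k -> 1.0 (direct serialisation);
# no shared objects -> 0.0 (the pair can execute concurrently).
print(serialisation_value({"d2", "d3"}, {"d2", "d3"}))  # 1.0
print(serialisation_value({"d2"}, {"d4", "d5"}))        # 0.0
```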
Interdependence value
The output to input connectivity between pairs of knowledge sources is a measure of their interdependence, whereby the set θj,k is computed to assess interdependence. The cardinality of the set θj,k for each pair {ksj, ksk} in β is a measure of the output to input connectivity for the pair {ksj, ksk}. Knowledge source pairs {ksj, ksk} with a high output to input connectivity imply that ksk is highly dependent on ksj for its input variables. Knowledge source pairs {ksj, ksk} with a low output to input connectivity imply that ksk's inputs are independent of ksj's outputs.

A proposed heuristic to measure knowledge source interdependence is to compute an interdependence value, Πj,k, for each pair {ksj, ksk} in β. Interdependence values measure the output to input connectivity between knowledge sources, {ksj, ksk}. The interdependence value is computed using the following (McManus 1992):

Πj,k = card(θj,k) / min(card(Φj), card(Ψk))     (5.122)
This heuristic computes the percentage of overlap between the sets Φj and Ψk, or the percentage of output data objects of ksj that are used as input data objects by ksk. The cardinality of the set θj,k divided by the minimum of the cardinalities of the sets Φj and Ψk computes a percentage of overlap between the set θj,k and the smaller of the sets Φj and Ψk. As Πj,k approaches 1.0, the output to input connectivity between ksj and ksk strengthens and the knowledge sources become more interdependent. As Πj,k approaches 0.0, the output to input connectivity between ksj and ksk weakens and the knowledge sources become independent. For the limiting cases, if Φj ⊃ Ψk, then Πj,k = 1.0, and ksj and ksk have direct output to input connectivity and are interdependent. If the set θj,k = φ (where φ is the null value), then Πj,k = 0.0, and the two knowledge sources have no output to input connectivity and are independent.
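The interdependence value of Eq. 5.122 completes the three pairwise measures; a minimal sketch under the same assumptions:

```python
# Interdependence value (Eq. 5.122): output-to-input connectivity of a pair.
def interdependence_value(phi_j: set, psi_k: set) -> float:
    theta_jk = phi_j & psi_k
    return len(theta_jk) / min(len(phi_j), len(psi_k))

# All of ks_j's outputs feed ks_k -> 1.0 (interdependent);
# no output-to-input connectivity -> 0.0 (the pair is independent).
print(interdependence_value({"d2"}, {"d2", "d6"}))  # 1.0
print(interdependence_value({"d2", "d3"}, {"d7"}))  # 0.0
```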
c) Evaluation of the AIB Blackboard Model for Determining the Integrity of Engineering Design
The AIB blackboard model for determining the integrity of engineering design includes subsets of the knowledge sources and blackboard data objects that are used by the knowledge-based expert system section. This knowledge-based expert system section allows for the development of various expert systems, and is structured into facts, functions, conditions, constraints, rules and goals related to the subsets of the knowledge sources and blackboard data objects of the process analysis, plant analysis and operations analysis sections. The primary subsets of the knowledge sources for the process analysis and plant analysis sections are described below, in accordance with Fig 5.82 illustrating the AIB blackboard model for engineering design integrity.
Process analysis section
• Let Ks1 be the process definition module. This knowledge source makes use of six global data object inputs—di1, di2, di3, di4, di5 and di6, which can be represented by the set of input variables IV6 = {iv1, ..., ivn}—as well as a process description input, di7, and computes five data object outputs that can be represented by the set of output variables OV5 = {ov1, ..., ovm}, for the data object outputs do1 to do5.
The data object inputs di1 to di7 and data object outputs do1 to do5 are:

di1 = Plant/facility      di7 = Process description
di2 = Operation/area      do1 = Process sequence
di3 = Section/building    do2 = Mass balance
di4 = System/process      do3 = Heat balance
di5 = Assembly/unit       do4 = Energy balance
di6 = Component/item      do5 = Utilities balance
• Let Ks2 be the performance assessment module. This knowledge source makes use of the six global data object inputs di1, di2, di3, di4, di5 and di6, as well as a performance specification set, di8, and computes a performance output variable set, do6.
The performance specification set di8 can be represented by the set of input variables IV8 = {iv1, ..., ivn}, where di8 is the performance specification data object with IV8 = {efficiency, flow, precipitation, throughput, output, pressure, viscosity, absorption, temperature, losses, etc.}.
The performance output variable set do6 can be represented by the set of output variables OV6 = {ov1, ..., ovm}, where do6 is the performance output data object with OV6 = {efficiency rating, flow rating, throughput rating, output rating, yield, pressure rating, consistency, temperature rating, productivity, etc.}.
• Let Ks3 be the RAM assessment module. This knowledge source makes use of the six global data object inputs di1, di2, di3, di4, di5 and di6, as well as a conditions description set, d, and computes a conditions failure output variable set, d