Handbook of Reliability, Availability, Maintainability and Safety in Engineering Design - Part 77



Fig 5.97 Petri net-based optimisation algorithms in system simulation

dynamic modelling, time series prediction, adaptive control, etc., of various engineering design problems. A typical design problem that is ideal for ANN modelling is the formulation and evaluation of stream surge pressures in continuous flow processes, given in the simulation option of the AIB blackboard, as illustrated in Fig 5.97.

The NeuralExpert© (NeuroDimension 2001) program, imbedded in the AIB blackboard, asks specific questions and intelligently builds an ANN. The first step in building an ANN is the specification of the problem type, as illustrated in Fig 5.102. The four currently available problem types in the NeuralExpert are classification, function approximation, prediction, and clustering. Once a problem type is selected, the program configures the parameters based on a description of the problem. These settings can be modified in the AIB blackboard, or in the NeuralExpert.

Input data selection is the next step in constructing an ANN model. The input file selection panel specifies where the input data file is located, either by choosing the 'browse' button and searching through the standard Windows tree structure to find the relevant file referenced in the AIB blackboard database, or by clicking on the triangle at the right edge of the text box to display a list of recently used text files in the NeuralExpert.


Fig 5.98 AIB blackboard model with CAD data browser option

Figure 5.103 illustrates typical input data attributes for the example used to determine the stream surge pressure, given in the simulation model option of the AIB blackboard illustrated in Fig 5.97. In this case, the sample data would represent the x1, x2, x3, x4 values of the input attributes. The pressure surge Petri net given in Fig 5.97 includes conditions of flow surge criteria that now become the ANN input attributes, such as the pipe outlet diameter, the pipe wall thickness, the fluid bulk modulus, and Young's modulus. The goal is to train the ANN to determine the stream surge pressure (desired output) based on these attributes.
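As a rough illustration (not the NeuralExpert implementation), the following sketch arranges these four attributes and the desired surge-pressure output as a small training set and fits a feed-forward network with scikit-learn; all numeric values are placeholders.

```python
# Minimal sketch of the function-approximation set-up described above.
# The numeric values are placeholders, not data from the AIB blackboard.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Input attributes x1..x4: pipe outlet diameter (m), wall thickness (m),
# fluid bulk modulus (Pa), Young's modulus of the pipe material (Pa).
X = np.array([
    [0.20, 0.008, 2.2e9, 2.0e11],
    [0.25, 0.010, 2.2e9, 2.0e11],
    [0.30, 0.012, 2.1e9, 1.9e11],
    [0.35, 0.014, 2.1e9, 1.9e11],
])
# Desired output: stream surge pressure (kPa), placeholder values.
y = np.array([510.0, 480.0, 455.0, 430.0])

# Scale the widely ranging attributes before training.
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

ann = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                   max_iter=5000, random_state=0)
ann.fit(X_scaled, y)

# Estimate the surge pressure for a new (hypothetical) pipe configuration.
new_case = (np.array([[0.28, 0.011, 2.15e9, 1.95e11]]) - X.mean(axis=0)) / X.std(axis=0)
print("Predicted surge pressure (kPa):", ann.predict(new_case)[0])
```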

Typical computational problems associated with artificial neural network programs, with regard to specific as well as general engineering design requirements, include the following:

Classification problems are those where the goal is to label each input with a specified classification. A simple example of a classification problem is to label process flows as 'fluids' and/or 'solids' for balancing (the two classes, also the desired output) using their volume, mass and viscosity (the input). The input can be either numeric or symbolic but the output is symbolic in nature. For example, the desired output in the process balancing problem is the ratio of fluids and solids, and not necessarily a numeric value of each.
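A minimal sketch of such a classification set-up is given below, with scikit-learn standing in for the NeuralExpert classifier and invented volume, mass and viscosity values; the point is simply that the inputs are numeric while the desired output is a symbolic label.

```python
# Sketch of the classification set-up: numeric inputs, symbolic output.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Inputs: volume (m3), mass (kg), viscosity (Pa.s) -- placeholder values.
X = np.array([
    [1.0,  998.0, 0.001],
    [1.2, 1190.0, 0.050],
    [0.8, 2100.0, 0.000],
    [0.9, 2400.0, 0.000],
])
# Symbolic desired output: the class label of each process flow.
labels = np.array(["fluids", "fluids", "solids", "solids"])

clf = MLPClassifier(hidden_layer_sizes=(6,), max_iter=3000, random_state=0)
clf.fit(X, labels)

print("Predicted class:", clf.predict(np.array([[1.1, 1020.0, 0.002]])))
```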


Fig 5.99 Three-dimensional CAD integrated model for process information

Function approximation problems are those where the goal is to determine a numeric value, given a set of inputs. This is similar to classification problems, except that the output is numeric. An example is to determine the stream surge pressure (desired output) in numeric values, given the pipe outlet diameter, the pipe wall thickness, the fluid bulk modulus and Young's modulus. These problems are called function approximation because the ANN will try to approximate the functional relationship between the input and desired output. Prediction problems are also function approximation problems, except that they use temporal information (e.g. the past history of the input data) to make predictions of the available data.

Prediction problems are those where the goal is to determine an output, given a set of inputs and the past history of the inputs. The main difference between prediction problems and the others is that prediction problems use the current input and previous inputs (the temporal history of the input) to determine either the current value of the output or a future value of a signal. A typical example is to predict process pump operating performance (desired output) from motor current and delivery pressure performance values.
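The use of temporal history can be sketched as follows, using plain NumPy and invented motor current and delivery pressure series: each training row holds the current and previous readings (the memory taps), and the target is the next value of the pump performance index.

```python
# Sketch: building lagged (tapped-delay) inputs for a prediction problem.
import numpy as np

# Placeholder time series: motor current (A) and delivery pressure (kPa).
current  = np.array([41.0, 42.5, 43.1, 45.0, 47.2, 48.0, 46.5, 44.9])
pressure = np.array([610., 615., 612., 630., 655., 660., 640., 628.])
# Placeholder pump performance index to be predicted one step ahead.
performance = np.array([0.92, 0.93, 0.92, 0.90, 0.87, 0.86, 0.88, 0.90])

lags = 2  # number of past samples kept per signal (the "memory taps")
X, y = [], []
for t in range(lags, len(performance) - 1):
    # Current and previous readings of both input signals ...
    X.append(np.concatenate([current[t - lags:t + 1],
                             pressure[t - lags:t + 1]]))
    # ... used to predict the next value of the performance index.
    y.append(performance[t + 1])

X, y = np.array(X), np.array(y)
print(X.shape, y.shape)  # each row holds 2*(lags+1) lagged inputs
```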

Clustering problems are those where information is to be extracted from input data without any desired output. For example, in the analysis of process faults in designing for safety, the faults can be clustered according to the severity of their hazard consequences and risk.


Fig 5.100 CAD integrated models for process information

The fundamental difference between the clustering problem and the others is that there is no desired output (therefore, there is no error and the ANN model cannot be trained using back propagation).
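A brief sketch of such a clustering task is shown below, with invented fault descriptors and scikit-learn's k-means standing in for the unsupervised network; note that no desired output appears anywhere in the fit.

```python
# Sketch of a clustering problem: input data only, no desired output.
import numpy as np
from sklearn.cluster import KMeans

# Placeholder fault descriptors: consequence severity score, estimated
# probability of occurrence, and detectability score for each fault.
faults = np.array([
    [9.0, 0.20, 2.0],
    [8.5, 0.15, 3.0],
    [4.0, 0.60, 7.0],
    [3.5, 0.55, 8.0],
    [1.0, 0.90, 9.0],
    [1.5, 0.85, 9.5],
])

# Group the faults into three clusters (e.g. high/medium/low risk).
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(faults)
print(groups)  # cluster index assigned to each fault
```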

For classification problems to label each input with a specified classification, the option is given to randomise the order of the data before presenting these to the network. Neural networks train better if the presentation of the data is not ordered. For example, if the design problem requires classifying between two classes, 'fluids' and/or 'solids', for balancing these two classes (as well as the desired output) using their volume, mass and viscosity (the input), the network will train much better if the fluids and solids data are intermixed. If the data are highly ordered, they should be randomised before training the artificial neural network.

One of the primary goals in training neural networks in the process of 'iterative prediction' is to ensure that the network performs well on data that it has not been trained on (called 'generalisation'). The standard method of ensuring good generalisation is to divide the training data into multiple datasets or samples, as indicated in Fig 5.104. The most common datasets are the training, cross validation, and testing datasets.
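A minimal NumPy sketch of this division is given below; the 60/20/20 split fractions are illustrative assumptions rather than NeuralExpert defaults.

```python
# Sketch: shuffling exemplars and splitting them into the three datasets.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))   # placeholder input attributes
y = rng.normal(size=100)        # placeholder desired output

# Randomise the presentation order before splitting (see the note above
# on highly ordered data).
order = rng.permutation(len(X))
X, y = X[order], y[order]

# Illustrative split: 60 % training, 20 % cross validation, 20 % testing.
n_train = int(0.6 * len(X))
n_cv = int(0.2 * len(X))

X_train, y_train = X[:n_train], y[:n_train]
X_cv,    y_cv    = X[n_train:n_train + n_cv], y[n_train:n_train + n_cv]
X_test,  y_test  = X[n_train + n_cv:], y[n_train + n_cv:]

print(len(X_train), len(X_cv), len(X_test))  # 60 20 20
```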

The cross validation dataset is used by the network during training. Periodically, while training on the training dataset, the network is tested for performance on the cross validation set.


Fig 5.101 ANN computation option in the AIB blackboard

During this testing, the weights are not trained, but the performance of the network on the cross validation set is saved and compared to past values. The network shows signs of becoming over-trained on the training data when the cross validation performance begins to degrade. Thus, the cross validation dataset is used to determine when the network has been trained as well as possible without over-training (i.e. maximum generalisation).
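The role of the cross validation set can be pictured as an early-stopping loop, as in the sketch below; train_one_epoch and cv_error are hypothetical placeholders for whatever training and evaluation routines are available, and only the stopping logic is of interest.

```python
# Sketch of cross-validation-based stopping ("maximum generalisation").
# train_one_epoch() and cv_error() are hypothetical placeholder routines.
import copy

def train_until_cv_degrades(net, train_set, cv_set,
                            train_one_epoch, cv_error,
                            max_epochs=1000, patience=10):
    best_net, best_err = copy.deepcopy(net), float("inf")
    bad_checks = 0
    for epoch in range(max_epochs):
        train_one_epoch(net, train_set)        # weights updated here only
        err = cv_error(net, cv_set)            # weights NOT updated here
        if err < best_err:                     # CV performance improved
            best_net, best_err = copy.deepcopy(net), err
            bad_checks = 0
        else:                                  # CV performance degrading:
            bad_checks += 1                    # signs of over-training
            if bad_checks >= patience:
                break
    return best_net, best_err                  # best generalising weights
```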

Although the network is not trained with the cross validation set, it uses the cross validation set to choose the best set of weights. Therefore, it is not truly an out-of-sample test of the network. For a true test of the performance of the network, an independent (i.e. out of sample) testing set is used. This provides a true indication of how the network will perform on new data. The 'out of sample testing' panel shown in Fig 5.105 is used to specify the amount of data to set aside for the testing set.

It is important to find a minimal network, with a minimum number of free weights, that can still learn the problem. The minimal network is more likely to generalise well with new data. Therefore, once a successful training session has been achieved, the process of decreasing the size of the network should commence, and the training repeated until it no longer learns the problem effectively.
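One possible reading of this procedure in code is sketched below, with scikit-learn's MLPRegressor standing in for the AIB network and an assumed training-error threshold: the hidden-layer size is halved until the network no longer learns the problem, and the smallest adequate network is kept.

```python
# Sketch: shrinking the network until it no longer learns the problem.
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def smallest_adequate_network(X, y, start_size=64, max_mse=0.05):
    """Halve the hidden-layer size while training error stays acceptable."""
    best = None
    size = start_size
    while size >= 1:
        net = MLPRegressor(hidden_layer_sizes=(size,), max_iter=5000,
                           random_state=0).fit(X, y)
        if mean_squared_error(y, net.predict(X)) <= max_mse:
            best = net          # this size still learns the problem
            size //= 2          # try an even smaller network
        else:
            break               # it no longer learns: keep previous best
    return best
```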

The genetic optimisation component shown in Fig 5.106 implements a genetic algorithm to optimise one or more parameters within the neural network.


Fig 5.102 ANN NeuralExpert problem selection

The most common network parameters to optimise are the input columns, the number of hidden processing elements (PEs), the number of memory taps, and the learning rates. Genetic algorithms combine selection, crossover and mutation operators with the goal of finding the best solution to a problem. Genetic algorithms are general-purpose search algorithms that search for an optimal solution until a specified termination criterion is met.
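A compact sketch of such a search is given below; the gene layout, parameter ranges and population settings are illustrative assumptions, and evaluate_fitness is a hypothetical routine that would train a network for a chromosome and return the error to be minimised.

```python
# Sketch of a genetic algorithm over network parameters (chromosomes).
import random

# A chromosome's genes: number of hidden PEs and the learning rate.
def random_chromosome():
    return {"hidden_pes": random.randint(2, 32),
            "learning_rate": random.uniform(0.001, 0.5)}

def crossover(a, b):
    # Each gene of the child is taken from one of the two parents.
    return {k: random.choice([a[k], b[k]]) for k in a}

def mutate(c, rate=0.2):
    c = dict(c)
    if random.random() < rate:
        c["hidden_pes"] = random.randint(2, 32)
    if random.random() < rate:
        c["learning_rate"] = random.uniform(0.001, 0.5)
    return c

def genetic_search(evaluate_fitness, pop_size=20, generations=30):
    """evaluate_fitness(chromosome) -> error to minimise (e.g. CV error)."""
    population = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):              # termination criterion
        scored = sorted(population, key=evaluate_fitness)
        parents = scored[:pop_size // 2]      # selection of the fittest
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children       # next generation
    return min(population, key=evaluate_fitness)
```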

Network complexity is determined by the size of the neural network in terms of hidden layers and processing elements (neurons). In general, smaller neural networks are preferable to large ones. If a small network can solve the problem sufficiently, then a large one will not only require more training and testing time but may also perform worse on new data. This is the generalisation problem: the larger the neural network, the more free parameters it has to solve the problem. Excessive free parameters may cause the network to over-specialise or to memorise the training data. When this happens, the performance on the training data will be much better than the performance on the cross validation or testing datasets.

The network complexity panel shown in Fig 5.107 is used to specify the size of the neural network. It is essential to start ANN analysis with a 'low-complexity' network, after which analysis can progress to a medium- or high-complexity network to determine if the performance results are significantly better. A disadvantage is that medium- or high-complexity networks generally require a large amount of data.


Fig 5.103 ANN NeuralExpert example input data attributes

In the NeuralExpert© (NeuroDimension 2001) program, imbedded in the AIB blackboard, several criteria can be used to evaluate the fitness of each potential solution. The solution to a problem is called a chromosome. A chromosome is made up of a collection of genes, which are simply the neural network parameters to be optimised. A genetic algorithm creates an initial population (a collection of chromosomes) and then evaluates this population by training a neural network for each chromosome. It then evolves the population through multiple generations in the search for the best network parameters. Performance measures of the error criterion component provide several values that can be used to measure the performance results of the network for a particular dataset. These are:

• the mean squared error (MSE),

• the normalised mean squared error (NMSE),

• the percent error (% error).

The mean squared error (MSE) is defined by the following formula:

MSE = \frac{\sum_{j=0}^{P}\sum_{i=0}^{N}\left(d_{ij}-y_{ij}\right)^{2}}{N\,P}

where:

P = the number of output processing elements,
N = the number of exemplars in the dataset,
y_{ij} = the network output for exemplar i at processing element j,
d_{ij} = the desired network output for exemplar i at processing element j.

Fig 5.104 ANN NeuralExpert sampling and prediction

The normalised mean squared error (NMSE) is defined by

NMSE = \frac{P\,N\,\mathrm{MSE}}{\sum_{j=0}^{P}\frac{N\sum_{i=0}^{N}d_{ij}^{2}-\left(\sum_{i=0}^{N}d_{ij}\right)^{2}}{N}}

where:

MSE = the mean squared error defined above, and P, N and d_{ij} are as defined for the MSE formula.


Fig 5.105 ANN NeuralExpert sampling and testing

The percent error (%E) is defined by the following formula:

\%E = \frac{100}{N\,P}\sum_{j=0}^{P}\sum_{i=0}^{N}\left|\frac{dy_{ij}-dd_{ij}}{dd_{ij}}\right|

where:

P = the number of output processing elements,
N = the number of exemplars in the dataset,
dy_{ij} = the denormalised network output for exemplar i at processing element j,
dd_{ij} = the denormalised desired output for exemplar i at processing element j.
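These three measures can be computed directly from the desired and actual outputs, as in the NumPy sketch below, which follows the formulas above with invented example values (rows are exemplars i, columns are output processing elements j).

```python
# Sketch: MSE, NMSE and percent error for a dataset, per the formulas above.
import numpy as np

# Rows = exemplars i, columns = output processing elements j (placeholders).
d = np.array([[510., 0.92], [480., 0.90], [455., 0.88]])   # desired outputs
y = np.array([[505., 0.93], [470., 0.91], [462., 0.86]])   # network outputs

N, P = d.shape                      # exemplars, output processing elements

mse = np.sum((d - y) ** 2) / (N * P)

# Denominator: per-output variance term summed over the processing elements.
nmse = (P * N * mse) / np.sum(
    (N * np.sum(d ** 2, axis=0) - np.sum(d, axis=0) ** 2) / N)

# Here d and y are taken as already denormalised (dd, dy in the formula).
pct_error = 100.0 / (N * P) * np.sum(np.abs((y - d) / d))

print(mse, nmse, pct_error)
```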

Knowledge-based expert systems

Expert knowledge of how to solve complex engineering design problems is not often available. Knowledge-based expert systems are programs that capture that knowledge and allow its dissemination in the form of structured questions, to be able to determine the reasoning behind a particular design problem's solution. The knowledge-based expert systems incorporated in the AIB blackboard are based on the classical approach to expert systems methodology, which incorporates the following:


Fig 5.106 ANN NeuralExpert genetic optimisation

• User interface,

• Working memory,

• Inference engine,

• Facts list,

• Agenda,

• Knowledge base,

• Knowledge acquisition facility.

The user interface is a mechanism by which the user and the expert system communicate. The working memory consists of a global database of facts used by rules. The inference engine controls the overall execution of queries or selections related to problems and their solutions, based around the rules. The facts list contains the data on which inferences are derived. An agenda is a list of rules with priorities, created by the inference engine, whose patterns are satisfied by facts in the working memory. The knowledge base contains all the knowledge and rules. The knowledge acquisition facility is an automatic way for the user to enter or modify knowledge in the system, rather than having all the knowledge explicitly coded at the outset of the expert system's design.
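These components can be pictured with a toy forward-chaining sketch (a generic illustration, not the actual AIB blackboard implementation): the working memory holds facts, the knowledge base holds prioritised rules, the agenda lists the rules whose patterns the facts satisfy in priority order, and the inference engine fires them.

```python
# Toy forward-chaining sketch of the expert-system components listed above.

# Working memory: the global facts list on which inferences are derived.
facts = {"fluid_hammer_possible", "valve_closes_quickly"}

# Knowledge base: rules with a priority, the patterns (conditions) that
# must be satisfied by facts, and the fact each rule asserts when fired.
rules = [
    {"name": "surge_risk", "priority": 10,
     "if": {"fluid_hammer_possible", "valve_closes_quickly"},
     "then": "surge_pressure_risk"},
    {"name": "add_damper", "priority": 5,
     "if": {"surge_pressure_risk"},
     "then": "recommend_surge_damper"},
]

# Inference engine: build the agenda (rules whose patterns are satisfied,
# ordered by priority) and fire them, adding new facts to working memory.
fired = set()
while True:
    agenda = sorted((r for r in rules
                     if r["name"] not in fired and r["if"] <= facts),
                    key=lambda r: -r["priority"])
    if not agenda:
        break
    rule = agenda[0]
    facts.add(rule["then"])
    fired.add(rule["name"])
    print("fired:", rule["name"], "->", rule["then"])

print("final facts:", facts)
```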

The user interface of the AIB blackboard is an object-oriented application in which the designer can point and click at digitised graphic process flow diagrams
