Lifetime-Oriented Structural Design Concepts - P20



… the eigenvalue matrix Λ can be calculated with the matrix logarithm ln(•) (see [228]). The complex eigenvector matrix Φ (4.317) is similar in discrete time and continuous time but includes an arbitrary state space transformation. Therefore, the transformed complex eigenvector matrix …

Authored by Dietrich Hartmann, Yuri Petryna and Andrés Wellmann Jelic

The computer-based determination and analysis of structural reliability aims at the realistic spatiotemporal assessment of the probability that given structural systems will adequately perform their intended functions subject to existing environmental conditions. Thus, the computation of time-variant, but also time-invariant, failure probabilities of a structure, in total or in parts, governs the reliability analysis approach. In addition, based on the success gained in reliability analysis in the past, it has become increasingly popular to extend reliability analysis to reliability-based optimum design. In this way, structural optimization, which in sensitive cases often leads to infeasible structural layouts, naturally incorporates probabilistic effects in the optimization variables, criteria and constraints, yielding real-world optimization models. Because various modes of failure are customarily possible, a formidable task has to be solved, in particular for large and complex structural systems. In the following sections the currently most powerful approaches in reliability analysis are described to demonstrate their potential.

4.4.1 General Problem Definition

Authored by Dietrich Hartmann, Yuri Petryna and Andrés Wellmann Jelic

A reliability analysis aims at quantifying the reliability of a structure, accounting for uncertainties inherent in the model properties (material parameters, geometry) as well as in the environmental data (loads). The quantification is achieved by estimating the failure probability Pf of the structure. For problems in the scope of civil engineering, this probability depends on the random nature of the stress values S as well as the resistance values R, as depicted in Figure 4.103. Structural failure at a particular time T is defined by the event R < S, leading to the failure probability

Pf = P(R < S) = ∫∫_{r ≤ s} f_{R,S}(r, s) dr ds, (4.320)

Fig. 4.103. General definition of the failure domain depending on scattering resistance (R) and stress (S) values


where the quantities r, s are realizations of the random values R and S, and f_{R,S} denotes their joint density. Assuming R and S to be independent, this reduces to

Pf = ∫ F_R(s) f_S(s) ds, (4.321)

where F_R(s), f_S(s) represent the cumulative distribution function (cdf) of the resistance values and the probability density function (pdf) of the stress values, respectively.
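As a quick numerical illustration of eq. (4.321), the sketch below evaluates the convolution integral for independent, normally distributed R and S and checks it against the closed-form result Pf = Φ(−β) with β = (μ_R − μ_S)/√(σ_R² + σ_S²). All parameter values are invented for illustration; Python with NumPy/SciPy is used here and in the following sketches.

```python
# Minimal numerical check of eq. (4.321), assuming independent Gaussian
# resistance R and stress S with made-up parameters.
import numpy as np
from scipy import stats
from scipy.integrate import quad

mu_R, sig_R = 10.0, 1.5   # resistance mean / standard deviation (assumed)
mu_S, sig_S = 6.0, 1.0    # stress mean / standard deviation (assumed)

# Pf = integral of F_R(s) * f_S(s) over all s
integrand = lambda s: stats.norm.cdf(s, mu_R, sig_R) * stats.norm.pdf(s, mu_S, sig_S)
pf_num, _ = quad(integrand, -np.inf, np.inf)

# closed form for two independent Gaussians: Pf = Phi(-beta)
beta = (mu_R - mu_S) / np.hypot(sig_R, sig_S)
print(pf_num, stats.norm.cdf(-beta))   # both ~ 1.3e-2
```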

Mathematically, the random nature of the values R and S is modelled in terms of a vector of random variables X = {X1, X2, …, Xd}ᵀ and the corresponding joint probability density function

f_X(x) dx = P(x < X ≤ x + dx). (4.322)

In this context, the parameter d quantifies the total number of random variables, which corresponds to the stochastic dimension of the considered structural reliability problem. By using the joint probability density function in eq. (4.322), the failure probability in eq. (4.321) is reformulated as

Pf = P[g(X) ≤ 0] = ∫_{g(x,T) ≤ 0} f_X(x) dx. (4.323)

Here g(x, T) = 0 is the relevant time-dependent limit state function for a prescribed failure criterion; it divides the safe state from the unsafe state as follows:

g(x, T) > 0: safe state,
g(x, T) ≤ 0: failure. (4.324)

In general, multiple distinct limit state functions may be defined. In the following, however, only one limit state function is considered, for better readability.

By solving the multidimensional integral in eq. (4.323), an estimate of the failure probability at the point in time T is obtained. In addition to the time-dependent formulation of the limit state function in eq. (4.324), the stress values S as well as the resistance values R of certain structural problems may themselves exhibit a significant time dependency. As a consequence, the formulation of the resulting failure probability has to incorporate this time dependency as follows:


Pf(t) = P(R(t) < S(t)) = F_T(t). (4.325)

Thus, by evaluating this failure probability for multiple discrete time points t_i, the evolution of Pf(t) can be estimated.

Following the differentiation explained above between time-variant and time-invariant modelling of reliability problems, the corresponding solution methods are likewise presented separately in the following.

4.4.2 Time-Invariant Problems

Authored by Dietrich Hartmann, Yuri Petryna and Andrés Wellmann Jelic

Existing methods for solving time-invariant reliability problems can be separated into three main groups: analytical solutions, approximation methods and simulation methods. In the first group, an analytical, closed-form solution of the multidimensional integral in eq. (4.323) is sought. However, this approach is only feasible for low-dimensional reliability problems with a small number of random variables. As structural reliability problems in civil engineering typically comprise several random variables together with nonlinear limit state functions, this analytical approach cannot be applied. For the analysis and solution of structural reliability problems, the approximation and simulation methods are most favorable, so they are explained in more detail.

4.4.2.1 Approximation Methods

Well-developed methods for approximating the failure probability are FORM and SORM (First-Order and Second-Order Reliability Methods). These are analytical methods that convert the integration into an optimization problem. In order to simplify the calculation, the distribution functions of the random variables and the limit state function are transformed into a standardized Gaussian space, as outlined in Figure 4.104. This transformation is defined via the cumulative distribution function


Fig. 4.104. Standardization of an exemplary 2D joint distribution function for a subsequent FORM/SORM analysis

in almost all cases. FORM and SORM then simplify these functions by calculating linear and quadratic tangential surfaces, respectively. These surfaces are adapted at the so-called design point y*. This point of the limit state function h(y) is defined via the shortest distance (e.g. in FORM) between h(y) and the coordinate origin of the standardized Gaussian space. From this distance measure the safety index β = ||y*|| is obtained, and the failure probability is approximated as Pf ≈ Φ(−β).


A main computational task in these methods is finding the design point by means of suitable search algorithms. Conceptually simple analytical algorithms – like the Hasofer-Lind algorithm [356] or the derived Rackwitz-Fiessler algorithm [654] – were developed first and are still used for well-behaved reliability problems. As the search for the design point y* can be formulated in terms of an optimization problem, gradient-based optimization strategies like Sequential Quadratic Programming (SQP) [64] are alternatively also heavily employed. More detailed information on FORM/SORM and, particularly, on further developed derivatives of these methods is presented in [653].
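As an illustration of such a search, the sketch below implements the classical HL-RF fixed-point iteration in standard normal space for a made-up quadratic limit state; the function h, the finite-difference gradient and all settings are assumptions, not a production implementation.

```python
# HL-RF iteration sketch in standard normal space; failure for h(y) <= 0.
import numpy as np

def h(y):                         # invented limit state; design point at (3, 0)
    return 3.0 - y[0] - 0.5 * y[1] ** 2

def grad_h(y, eps=1e-6):          # forward-difference gradient approximation
    g0 = h(y)
    return np.array([(h(y + eps * np.eye(y.size)[i]) - g0) / eps
                     for i in range(y.size)])

y = np.zeros(2)                   # start the search at the origin
for _ in range(50):
    g, dg = h(y), grad_h(y)
    y_new = (dg @ y - g) * dg / (dg @ dg)   # HL-RF update
    if np.linalg.norm(y_new - y) < 1e-8:
        break
    y = y_new

beta = np.linalg.norm(y)          # safety index = distance to the origin
print(beta)                       # FORM estimate: Pf ~ Phi(-beta)
```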

A critical view of these approximation methods leads to the following statements. In general, these methods only approximate the sought failure probability by ignoring existing nonlinearities, e.g. in the limit state function, such that a significant error may be introduced while only poor information about this possible error is provided. Furthermore, the above-explained identification of the design point y* by means of optimization strategies may only deliver a local minimum of the limit state function, possibly missing the global minimum. Another disadvantage is the low numerical efficiency when solving high-dimensional structural reliability problems (high-dimensional in terms of the number d of random variables). This low efficiency results from the computation of gradients at multiple points of the limit state function, as this computation itself comprises – in most cases – multiple FE analyses. In this context, the authors in [711] state a lack of robustness, accuracy and competitiveness compared to simulation methods for d > 30. An exemplary analysis of the influence of the number d on the results is given in [653].

Conversely, the approximation methods are well suited for the computation of small failure probabilities, say Pf ≤ 10⁻⁶, when reliability problems with a small number d of random variables are to be analyzed. Problems with multiple limit state functions (union or intersection of failure domains) can also be analyzed very efficiently when extended versions of FORM/SORM – as summarized in [653] – are employed. Due to this high efficiency for low-dimensional problems (in terms of random variables), these approximation methods are widely used in the scope of Reliability-Based Design Optimization (RBDO, see Section 4.5), as stated in [300, 519].

4.4.2.2 Simulation Methods

Authored by Dietrich Hartmann, Andrés Wellmann Jelic and Yuri Petryna

In contrast to the approximation methods named above, the class of Monte Carlo Simulation (MCS) methods has to be mentioned. These methods use the given density functions to create multiple sets of realizations of all random variables. For each set of realizations, a deterministic analysis of the considered limit state function g(x) is performed – in civil engineering predominantly a structural analysis using the Finite Element Method. Afterwards, the results are evaluated concerning failure or survival. In order to simplify the description of the analysis results, an indicator function

I(x) = 1 for g(x) ≤ 0 (failure), I(x) = 0 otherwise,

is introduced, so that the failure probability is estimated by the sample mean P̂f = (1/n) Σ I(x_i) over the n generated realizations.


The big disadvantage of the classical Monte Carlo Simulation is that the accuracy of the estimated results is proportional to 1/√n. Therefore, an increase in accuracy by one order of magnitude demands an increase in the number of discrete simulations by around two orders of magnitude. The main reason is the clustered generation of realizations of the random variables near their mean values. As the demanded failure probabilities in structural engineering are very small, an uneconomic number of simulations has to be performed to obtain good estimates. Consequently, the class of variance-reducing methods has been developed based on the classical Monte Carlo Simulation. Some variants are, e.g., Importance Sampling, Stratified Sampling or Adaptive Sampling; more details can be found in [159, 706].

4.4.2.2.1 Importance Sampling

Representative of the above-named variance-reducing simulation methods, the main principles of Importance Sampling are explained briefly. The Importance Sampling method moves the main generation point for realizations near the design point y*, shown in eq. (4.329), and then defines a new simulation density function h_V(v) centered there. This rewrites the integral in eq. (4.323) as the estimator

P̂f = (1/m) Σ_{n=1}^{m} I(v_n) f_X(v_n) / h_V(v_n), (4.337)

using m simulation runs and samples v_n drawn from h_V. In order to obtain good approximate estimates of the failure probability, the choice of the sampling density h_V(v) is essential. The variance of eq. (4.337) is

Var[P̂f] = 1/(m − 1) [ (1/m) Σ_{n=1}^{m} ( I(v_n) f_X(v_n) / h_V(v_n) )² − P̂f² ].

The exact solution for Pf is obtained if h_V(v) is defined proportional to the real density function f_X(v); this, however, implies knowledge of the sought probability itself. Instead, [760] proposes using the original density function f_V(v), a normal or a uniform distribution. The design point y* can be determined from a pre-executed FORM or SORM calculation, respectively.
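The sketch below illustrates the estimator of eq. (4.337) for a linear limit state in standard normal space, with h_V chosen as a unit Gaussian shifted to an assumed design point; the exact Pf = Φ(−β) is printed for comparison. All settings are illustrative.

```python
# Importance Sampling sketch: sampling density shifted to the design point.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
beta = 3.0
y_star = np.array([beta, 0.0])        # design point, e.g. from a FORM run

m = 20_000
v = rng.normal(size=(m, 2)) + y_star  # samples v_n from h_V centered at y*

# weights f_X(v) / h_V(v), computed via log-densities for numerical safety
log_w = (stats.norm.logpdf(v).sum(axis=1)
         - stats.norm.logpdf(v - y_star).sum(axis=1))
I = (beta - v[:, 0]) <= 0.0           # failure indicator for g(y) = beta - y1
pf_hat = np.mean(I * np.exp(log_w))
print(pf_hat, stats.norm.cdf(-beta))  # both ~ 1.35e-3
```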

4.4.2.2.2 Latin Hypercube Sampling

Stochastic modelling of random values for reliability analysis using direct Monte Carlo simulations requires a huge number of samples if the failure probability is small. In other cases, one requires solely statistical characteristics of the structural response, such as displacements or stresses, estimated over a certain range of input values. Sometimes, critical parameter combinations corresponding to structural failure conditions, i.e. to limit states, are of interest. For those purposes, the number of required simulations can be significantly reduced by special simulation techniques such as Latin Hypercube Sampling (LHS) [285].

According to the direct Monte Carlo approach, uniformly distributed random values x_k, k = 1, 2, …, are generated within the probability range [0, 1] and then transformed into the actual random samples of a certain variable X_k by means of its inverse probability function X_k = F_X⁻¹(x_k). In such a way, a uniform distribution x_k can be "mapped" onto an arbitrary distribution function of interest. For most practically important statistical distributions, like the normal distribution, the probability density function (PDF) is concentrated more or less around the mean value. Thus, rare values X_k corresponding to the tails of the PDF can be reliably generated only with a large number of Monte Carlo simulations.

The main idea of Latin Hypercube Sampling is to divide the probability range [0, 1] into N_sim equal intervals and take their centroids, randomly permutated, for the mapping onto the probability function of interest (Figure 4.105). Here, N_sim denotes the number of simulations, i.e. the size of the set of samples. It is evident that LHS covers the entire range of values much better than direct MCS for the same, relatively small number N_sim. The applications of LHS to stochastic structural analysis in [474, 624] confirm that as few as ten to a hundred LHS simulations already provide acceptable results.
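The stratify-permute-map construction can be sketched in a few lines for a single Gaussian variable; N_sim = 50 mirrors the comparison in Figure 4.105, and everything else is an illustrative assumption.

```python
# Latin Hypercube Sampling of one standard Gaussian variable.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sim = 50

centroids = (np.arange(n_sim) + 0.5) / n_sim   # centers of N_sim equal strata
u_lhs = rng.permutation(centroids)             # random permutation of the strata
x_lhs = stats.norm.ppf(u_lhs)                  # map through the inverse CDF

x_mcs = stats.norm.ppf(rng.uniform(size=n_sim))  # direct MCS for comparison
print(x_lhs.mean(), x_lhs.std())               # close to 0 and 1
print(x_mcs.mean(), x_mcs.std())               # noticeably rougher
```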


Fig. 4.105. Comparison of Latin Hypercube Sampling and Monte Carlo Simulation

Figure 4.105 illustrates the difference between LHS and MCS by means of N_sim = 50 simulations: the histogram of the random set of a Gaussian variable generated by LHS is almost ideal compared to that generated by MCS using the same N_sim.

Latin Hypercube Sampling has been successfully applied in [624] to stochastic sensitivity analysis of a reinforced/prestressed concrete bridge and to calculation of the limit state points in the framework of the Response Surface Method (Section 4.4.2.3).

4.4.2.2.3 Subset Methods

A novel and very promising simulation method called Subset Simulation (SubSim) has been proposed by Au & Beck in [67] for estimating small Pf values. This method reduces the numerical effort compared to direct MCS by expressing small failure probabilities as a product of larger, conditional probabilities. These conditional probabilities are estimated for decreasing intermediate failure events (subsets) {F_i}, i = 1, …, m, with F_1 ⊃ F_2 ⊃ … ⊃ F_m = F, such that the failure probability

Pf = P(F_m) = P(F_1) Π_{i=1}^{m−1} P(F_{i+1} | F_i)

is defined as the product of all conditional failure probabilities P(F_{i+1}|F_i). By selecting the intermediate failure events F_i appropriately, large conditional failure probabilities are achieved that can be computed efficiently by direct Monte Carlo estimators.
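A compact sketch of this adaptive procedure, in the SubSim/MCMC flavour, is given below for a scalar limit state in standard normal space. The level probability p0 = 0.1, the random-walk Metropolis proposal and the toy limit state are common textbook choices and assumptions here, not the exact algorithm of [67].

```python
# Simplified Subset Simulation (MCMC flavour); failure when g(y) <= 0.
import numpy as np

rng = np.random.default_rng(4)

def g(y):                                   # toy limit state; exact Pf = Phi(-4)
    return 4.0 - y[:, 0]

d, n, p0 = 2, 1000, 0.1
y = rng.normal(size=(n, d))                 # direct MCS at the first level
pf = 1.0

for _ in range(20):                         # hard cap on the number of levels
    gy = g(y)
    thresh = np.quantile(gy, p0)            # intermediate failure threshold
    if thresh <= 0.0:                       # actual failure event reached
        pf *= np.mean(gy <= 0.0)
        break
    pf *= p0                                # conditional probability ~ p0
    chains = []
    for s in y[gy <= thresh]:               # conditional samples seed the chains
        x = s.copy()
        for _ in range(int(1 / p0)):
            cand = x + rng.normal(size=d)   # random-walk proposal
            # Metropolis acceptance for the standard normal target,
            # restricted to the current intermediate failure event
            if (rng.uniform() < np.exp(0.5 * (x @ x - cand @ cand))
                    and g(cand[None, :])[0] <= thresh):
                x = cand
            chains.append(x.copy())
    y = np.asarray(chains)[:n]

print(pf)                                   # ~ 3.2e-5
```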

Three variants of Subset Simulation have been developed so far, namely SubSim/MCMC, SubSim/Splitting and SubSim/Hybrid. All variants are based on the same adaptive simulation procedure; however, they differ in the generation of the next conditional sample when reaching an intermediate failure event. A general summary of these variants together with their application in the context of a benchmark study is given in [68].

This benchmark study on reliability estimation of structural systems in higher dimensions has been organized since 2004 by the Institute of Engineering Mechanics, University of Innsbruck, Austria (Prof. Schuëller, Prof. Pradlwarter). The intermediate results of this benchmark, presented in [710], attest a general applicability together with a very high computational efficiency to almost all Subset Simulation variants.

4.4.2.3 Response Surface Methods

The reliability assessment of structures is usually focused on the evaluation of the failure probability:

Pf = P[g(X) ≤ 0] = ∫_{g(x) ≤ 0} f_X(x) dx. (4.342)

This task includes, on the one hand, an expensive structural analysis for determination of the limit state function g(X) and, on the other hand, the solution of the multi-dimensional integral (4.342).

The reliability analysis of large structures imposes a number of typical difficulties influencing the choice of an appropriate approach to calculate failure probabilities:

• the need for multiple and expensive nonlinear analyses of the entire structure;

• generally implicit limit state functions, which can be determined only for discrete values of the governing parameters;

• the vagueness about the parameters dominating the failure probability.

The Response Surface Method (RSM) [271, 160, 712], combined with efficient Monte Carlo simulations [712], helps to avoid at least the first two difficulties. Its application in general includes the following steps.

First, the limit state points are determined by means of deterministic nonlinear structural simulations, as described above. The corresponding critical values of the random variables X^(k) belong to the implicit limit state function g(X) = 0 and satisfy global equilibrium of the system, for example in the static case:

F_I(u, X^(k)) = λ P(X^(k)), with g(X^(k)) = 0. (4.343)


In the second step, the actual limit state function g(X) – the response surface – is approximated by an analytical function g*(X) providing the minimum least-squares error over the set of limit state points X^(k), k = 1, …, n_p:

Σ_{k=1}^{n_p} [ g*(X^(k)) − g(X^(k)) ]² → min. (4.344)

Complete quadratic polynomials

g*(X) = a + bᵀX + XᵀcX (4.345)

are the most frequently used approximations [709]. For n_V random variables one needs to find at least n_p = 1 + n_V + n_V(n_V + 1)/2 limit state points in order to determine the polynomial coefficients a, b, c uniquely.
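For n_V = 2, the fit of eq. (4.345) and the subsequent cheap simulation on the surrogate can be sketched as follows. The "true" limit state standing in for the FE analysis is invented, and for brevity the surface is fitted to scattered samples of g rather than to points on g = 0.

```python
# Quadratic response surface fit plus Monte Carlo on the surrogate.
import numpy as np

rng = np.random.default_rng(5)

def g_true(x1, x2):                        # stand-in for the expensive FE model
    return 7.0 - x1 ** 2 - 0.5 * x1 * x2 - 2.0 * x2

def basis(X):                              # quadratic basis: a, b_i, c_ij terms
    return np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                            X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2])

n_p = 1 + 2 + 3                            # minimum point count for n_V = 2
X = rng.uniform(-3.0, 3.0, size=(3 * n_p, 2))   # oversampled support points

coef, *_ = np.linalg.lstsq(basis(X), g_true(X[:, 0], X[:, 1]), rcond=None)

# statistical simulation on g*(X) instead of the FE model
U = rng.normal(size=(200_000, 2))
print(np.mean(basis(U) @ coef <= 0.0))     # failure probability estimate
```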

In the last step, the failure probability is calculated by means of statistical simulation of the analytical limit state function g*(X) of eq. (4.345):

Pf ≈ ∫_{g*(x) ≤ 0} f_X(x) dx.

The described approach applies to arbitrary limit states and failure criteria, for linear as well as nonlinear, static and dynamic problems in a similar manner. A clear separation of the deterministic structural analysis, the response surface approximation and the statistical simulations, through simple interfaces in the form of the limit state points X^(k) and the coefficients a, b, c, allows for independent use of the best currently available state-of-the-art solutions at each step. Consistent nonlinear structural analysis of arbitrary sophistication and probabilistic reliability assessment become possible without theoretical limitations. One of the practical advantages of the RSM is its availability in the form of commercial software like COSSAN [708].

The disadvantages of the RSM are mainly related to the accuracy of the approximation g*(X) in cases of complex, fold-like or only piecewise-smooth limit state functions [688]. Such situations occur, for example, due to different possible failure modes in dynamic systems. For static or long-term problems, each limit state function is typically defined with respect to the associated load cases and can therefore mostly be distinguished from other limit states. Due to the fundamental advantages of the RSM, new types of approximations and algorithms are currently under development, see XX for example. According to [709], the maximum number of random variables acceptable for efficiency reasons is currently about n < 20. Therefore, the choice


of representative sets of random parameters is a challenging task for reliability analysis in general, but especially when the Response Surface Methods are used. Therefore, a preliminary sensitivity analysis is required in order to weight the impact of each random parameter on the failure probability.

4.4.2.4 Evaluation of Uncertainties and Choice of Random Variables

Many parameters in structural analysis are not known exactly and thus introduce uncertainties. Those of the input information can generally be classified into load, material and structural uncertainties. Additionally, model uncertainties and output uncertainties mirrored in the structural response variables must be accounted for.

In this contribution only stochastic approaches to handling uncertainties in structural and reliability analysis are considered. An overview of uncertainty models with respect to stochastic finite elements is given in [521]. Stochastic uncertainties and models are already part of the Probabilistic Model Code developed by the Joint Committee on Structural Safety [815].

For reasonable computational expense, only the most important variables shall be treated statistically. On the other hand, the set of selected random variables shall reflect all principal sources of uncertainty for realistic response predictions. The importance of each uncertain parameter X_i, independently of its origin, can be quantified by its contribution σ_i² to the variance of the structural response,

σ_i² = (∂g/∂X_i)² σ_{X_i}², (4.348)

where σ_{X_i} denotes the standard deviation of X_i.

As limit state functions are generally nonlinear, the gradients (4.348) differ from point to point X. Therefore, a special sensitivity analysis shall be performed by calculating the gradients (4.348) on a grid of points within a physically meaningful range of values of the considered random variables X_i. Usually, it is sufficient to consider the gradients at the boundaries and in the center of the domain of interest.

If the importance measure of X_i, estimated by σ_i², exceeds a given threshold value TOL, the parameter X_i is included in the set of random variables; otherwise it can be treated deterministically, e.g. fixed at its mean value.
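A possible finite-difference screening along these lines is sketched below; for brevity the gradients are evaluated only at the mean point rather than on the grid described above, and the example function, standard deviations and threshold TOL are assumptions.

```python
# Gradient-based importance screening of random variables.
import numpy as np

def g(x):                                  # invented limit state function
    return 10.0 - x[0] - 0.2 * x[1] - 0.01 * x[2]

mean = np.array([4.0, 2.0, 1.0])           # assumed mean values of X_i
std = np.array([1.0, 1.5, 0.5])            # assumed standard deviations
TOL = 0.01                                 # assumed relative importance threshold

steps = np.diag(1e-5 * std)                # one central-difference step per X_i
grad = np.array([(g(mean + h) - g(mean - h)) / (2.0 * h[i])
                 for i, h in enumerate(steps)])

var_i = grad ** 2 * std ** 2               # variance contribution of each X_i
importance = var_i / var_i.sum()
print(importance, importance > TOL)        # here X_3 falls below the threshold
```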


4.4.3 Time-Variant Problems

Authored by Dietrich Hartmann, Yuri Petryna and Andrés Wellmann Jelic

The time-dependent formulation of Pf in eq. (4.325) is equivalent to the distribution of the first passage point of the time-dependent state function into the failure domain. As this distribution is rarely known explicitly, corresponding numerical algorithms have to be employed. The existing algorithms in the literature can be assigned to one of the three following categories.

4.4.3.1 Time-Integrated Approach

The basic idea of this approach is the transformation of the time-variant problem of eq. (4.325) into a time-invariant problem. This transformation is accomplished by means of an integration of the stress values S as well as the resistance values R over the targeted lifetime T_{L,D}. The failure probability

Pf(T_{L,D}) = P[g(R_min(T_{L,D}), S_max(T_{L,D})) ≤ 0] (4.350)

is estimated based on the extreme value distributions of the {R, S} values, as proposed by Wen & Chen in [831].

A main disadvantage of this approach is a probable overestimation of the failure probability Pf, as the considered extreme values {R_min, S_max} rarely occur simultaneously. As a consequence, equivalent time-independent load combinations, as published by Wen in [830], have to be defined for each considered structural reliability problem in order to obtain realistic estimates of Pf.

4.4.3.2 Time Discretization Approach

Basically, this approach overcomes the above-named drawback of the time-integrated approach by defining the extreme values {R_min, S_max} for a time period ΔT_i shorter than the demanded lifetime T_{L,D}, e.g. the duration of a single storm or, alternatively, one year.

At first, the failure probability

Pf(ΔT_i) = P[g(X(ΔT_i)) ≤ 0]

within the time period ΔT_i is computed based on the extreme values X corresponding to this period. Subsequently, by solving the hazard function

h_{T_{L,D}}(T) = f_{T_{L,D}}(T) / (1 − F_{T_{L,D}}(T)),

as defined in the literature ([47, 525]), for the time T, the sought failure probability can be estimated as

Pf(t) = F_{T_{L,D}}(t) = 1 − exp( −∫₀ᵗ h_{T_{L,D}}(τ) dτ ).
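The bookkeeping of this approach can be sketched as follows, assuming mutually independent one-year periods and invented per-year probabilities; under this independence assumption the discrete hazard h_k = f_k/(1 − F_{k−1}) reduces to the per-period probability itself.

```python
# Evolution of Pf(t) from per-period failure probabilities.
import numpy as np

p_year = np.full(50, 2.0e-4)        # assumed failure probability per year
p_year[20:] *= 1.5                  # e.g. increased risk due to deterioration

survival = np.cumprod(1.0 - p_year) # P(no failure through year k)
pf_t = 1.0 - survival               # Pf(t_k) = F_T(t_k)

print(pf_t[[0, 9, 49]])             # Pf after 1, 10 and 50 years
```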

This time discretization approach allows the approximate computation of time-variant Pf values. Since variance-reducing simulation techniques for time-invariant reliability analyses can be used within each discrete time period ΔT_i, the overall computation can be accomplished within a relatively short runtime. A restriction to be stated in this context is the inefficiency of this method for the solution of dynamic structural problems like the example analyzed in Section 4.6.4.

4.4.3.3 Outcrossing Methods

Within this approach the time-variant reliability problem in eq. (4.325) is formulated as an outcrossing or first passage problem, respectively, based on the positive outcrossing rate ν_ξ⁺ and the failure threshold ξ. Analogously to the time-invariant solution methods, approximation as well as simulation methods have been developed to solve this first passage problem. The existing approximation methods have drawbacks with respect to their applicability: Rackwitz states in [653] that only specific categories of problems have been solved by employing these methods, e.g. problems with Gaussian vector processes or, alternatively, with rectangular wave renewal processes [152].

Conversely, the existing simulation methods are generally applicable and allow a runtime-efficient estimation of the time-variant failure probability. The demanded evolution of the failure probability over time can be computed by estimating the first passage probability

Pf(τ) = P(T ≤ t) = F_T(t) = ∫_Ω I_e(t, x₀, ω) dF(ω) (4.354)

in a time interval [0, t]. The vector x₀ contains the initial conditions of all stochastic processes, and ω is a probability parameter in the probability space Ω. Furthermore, the first excursion indicator function


I_e(t, x₀, ω) = 1 if ∃ τ ∈ [0, t]: y_ω(τ) ≥ ξ (failure domain),
I_e(t, x₀, ω) = 0 if ∀ τ ∈ [0, t]: y_ω(τ) < ξ (safe domain), (4.355)

is evaluated at a given time t and indicates whether the resulting stochastic process y_ω(τ) exceeds a predefined threshold level ξ.
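Before turning to DC-MCS, a direct, unweighted Monte Carlo estimate of eqs. (4.354)-(4.355) is sketched below for a toy discrete-time response process; the AR(1) process, the threshold and the ensemble size are invented, and the weighting machinery described next is deliberately omitted.

```python
# First passage probability of a toy discrete-time process by direct MCS.
import numpy as np

rng = np.random.default_rng(6)
n_sim, n_steps, xi = 5_000, 200, 3.5   # ensemble size, time steps, threshold

# AR(1) process standing in for the structural response y_omega(tau)
y = np.zeros((n_sim, n_steps))
for k in range(1, n_steps):
    y[:, k] = 0.9 * y[:, k - 1] + rng.normal(scale=0.5, size=n_sim)

exceed = y >= xi                        # pointwise threshold exceedance
crossed = exceed.any(axis=1)            # first excursion indicator I_e
first = np.where(crossed, exceed.argmax(axis=1), n_steps)

# empirical first passage distribution: Pf(t) = P(T <= t)
pf_t = np.array([(first <= t).mean() for t in range(n_steps)])
print(pf_t[-1])                         # Pf over the whole interval [0, t]
```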

For the solution of the time-variant reliability problems dealt with in the Collaborative Research Center 398, the Distance-Controlled Monte Carlo Simulation (DC-MCS) has been adapted. The DC-MCS method was proposed in [647] and facilitates a runtime-efficient solution of complex dynamical systems. It is based on a time-variant MCS including a vector w_t(ω) of weight values for each of the n_sim realizations in the generated ensemble. Initially, all vector elements are set to a value of one; during the simulation the weights are then modified by means of the Russian Roulette & Splitting (RR&S) technique, which doubles 'important' realizations (S) and deletes 'unimportant' ones (RR). Thereby, the a priori undefined importance is quantified during the runtime by means of an evolutionary distance criterion presented in [647].

4.4.4 Parallelization of Reliability Analyses

Authored by Dietrich Hartmann and Andrés Wellmann Jelic

Processing uncertainties by means of probabilistic methods to determine structural reliability results in exceptionally increased computational effort, even if only moderately complex structures are considered. This obvious difficulty, known for a long time, forms the main obstacle to a rapid and enthusiastic acceptance of probabilistic methodologies in practical engineering. Needless to say, the application of uncertainty methods is nevertheless becoming mandatory in the time to come. In this context, parallel processing or distributed computing, including modern methods of autonomous computing, e.g. agent-based parallelization, appears to be an appropriate way of overcoming this dilemma and the existing drawbacks, because parallelization of reliability analysis enables drastic cuts in the computer time required for a given task. In addition, cost barriers posed by expensive special parallel computer systems in the past have become obsolete, as clusters made of customary personal computers are both available and affordable for civil engineering institutions. From the viewpoint of algorithms and software implementation, reliability analysis methods furthermore allow for …
