Fig 4.34 Beam 1: Load-displacement curve and number of elements for different tolerances and crit2 (Q1SPs/o, nGP = 16)
Fig 4.35 Beam 2: Load-displacement curve and number of elements for different
tolerances and crit2 (Q1SPs/o, nGP = 16)
computed with a very small tolerance (see the curves for tolerr = 0.00001 in Figure 4.35a) hardly differs from the result achieved with a rather large tolerance (Figure 4.35a, tolerr = 0.05).
plotted in Figure 4.36. As expected, refinement starts in the loading area and in the parts of the structure where the plastic strain begins to accumulate (X ≈ 4500 mm).
provides a physically reasonable mesh refinement. Additionally, in contrast to crit1, it measures the quality of the solution accurately enough, i.e. in such a

require a much higher mesh density for the same quality of solution. Therefore
Fig 4.36 Beam 2: Different states of mesh refinement (Q1SPs/o, 16 El.), contours:
accumulated plastic strain
shows the same mesh refinement when the error tolerance is set equal to an extremely small value.
4.2.8.1.10.3 Biaxial Bending (Thick Plate of Uniform Thickness)
Geometry and boundary conditions are depicted in Figure 4.37. At the boundary X = 12000 mm the displacement in X-direction is constrained. Analogously, at the boundary Y = 12000 mm we apply constraints in Y-direction. The material parameters and the reference load are the same as in
equidistant steps)
thickness direction to yield a converged result (see Figure 4.38a). The solution for tolerr = 0.0001 (refinement up to 140 elements, see Figure 4.38b) is very close to the one computed with a mesh of 7x7x4 elements. The curves for the larger tolerances (tolerr ≥ 0.0002) bring up the problem that at the beginning of the refinement (one to two new elements) the solution stiffens, i.e. the displacement w becomes smaller, see detail A in Figure 4.39. At a larger load the solution softens again (detail B in Figure 4.39). However, the solution for tolerr = 0.001 must be judged as worse than the result for tolerr = 0.01 where
Fig 4.37 Plate 1: Geometry and boundary conditions (side length 12000 mm, thickness 1200 mm, deflection w)
Fig 4.38 Plate 1: Load-displacement curve and number of elements for different
tolerances and crit2 (Q1SPs, nGP = 8)
no mesh refinement takes place. We assume that the problem is due to the transfer of the history variables, which can never be exact. The element formulation

The deficiency is easily overcome by working with a slightly smaller tolerance, where right at the beginning several new elements are inserted. In summary it can, however, be stated that the differences between the curves for rather large and very small tolerances are again small.

the error tolerance tolerr = 0.01, Q1SPs/o requires for a solution of the same

the solution cannot be further improved. It has still not completely converged at this point.
Fig 4.39 Plate 1: Load-displacement curve (left) and details of it (right) for different tolerances and crit2 (Q1SPs, nGP = 8)
Fig 4.40 Plate 1: Different states of mesh refinement (Q1SPs/o, 16 El.), contours:
accumulated plastic strain
The adaptive increase of the number of elements is visualized in Figure 4.40, where a severe element distortion is detected. At this point a simultaneous refinement in the plate plane is necessary. It should be made available in the future.
Fig 4.41 Plate 1: Load-displacement curve and number of elements for different load steps and crit2 (Q1SPs/o, nGP = 8, tolerr = 0.01)
Fig 4.42 Plate 1: Load-displacement curve and number of elements for different
load steps and crit2 (Q1SPs, nGP = 8, tolerr = 0.0001)
A point of further interest is the robustness of the algorithm. For this

curves computed with different load steps (see Figure 4.41). We obtain the quite impressive result that the total load (ν = 60) can be applied in one step, which shows the robustness of the proposed algorithm. The resulting

0.01). The new elements appear in almost the same order as for much smaller load steps.

influenced by the load step. Fewer new elements are generated if the load step is chosen to be very large. Moreover, the solution shows the peculiar stiffening again, i.e. the insertion of a few new elements does not lead, as expected, to a softer (in this case better) solution.
4.2.8.2 Error-Controlled Temporal Adaptivity
Authored by Detlef Kuhl and Sandra Krimpmann
Strategies for the error controlled adaptive time integration of durability problems consist of two main parts: the computation of error measures and the control algorithm for the time step size. Local error estimates and different local error indicators can be used as controlling parameters in adaptive time stepping schemes.
4.2.8.2.1 Local a Posteriori h- and p-Method Error Estimates
In order to obtain an estimate of the local time integration error, error measures based on simultaneously performed time integrations of higher accuracy are used. As illustrated in Figure 4.43 for Newmark and Galerkin type integration schemes, the integration quality can be improved by a reduction of the time step size Δt. Since the time integration error e ∼ Δt^o is proportional to the time step size Δt raised to the order of accuracy o, the error e can be significantly reduced if the time step size is divided by m. The resulting improved solution u^{Δt/m}_{n+1} allows for the estimation of the local time integration error by the h-method.
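A minimal sketch of this h-method estimate may help to fix ideas. It assumes a generic one-step integrator step(u, v, t, dt) for the semidiscrete displacement/velocity state; this interface is an illustrative assumption, not the implementation used here.

```python
def h_method_error_estimate(step, u_n, v_n, t_n, dt, m=2):
    """Local h-method error estimate: compare one step of size dt with
    m substeps of size dt/m (the improved comparison solution)."""
    # basic solution u^{dt}_{n+1} computed with the full step size
    u_coarse, _ = step(u_n, v_n, t_n, dt)

    # improved comparison solution u^{dt/m}_{n+1}
    u_fine, v_fine, t = u_n, v_n, t_n
    for _ in range(m):
        u_fine, v_fine = step(u_fine, v_fine, t, dt / m)
        t += dt / m

    # local error estimate: difference of the two end-of-step solutions
    return u_fine - u_coarse
```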
The order of accuracy of Galerkin type integration schemes is controlled by the polynomial degree p, whereby the higher the polynomial degree, the higher the order of accuracy.
Fig 4.43 Illustration of h-method error estimates and indicators associated with Newmark and Galerkin time integration schemes as well as p-method error estimates and indicators for p-Galerkin time integration schemes
For p-Galerkin type integration schemes a higher order accurate comparison solution can alternatively be generated by Galerkin integration schemes of a higher polynomial degree p + m or by a Galerkin integration with a smaller time step size Δt/m, compare Figure 4.43. The resulting improved solution u^{Δt/m}_{n+1} allows for the estimation of the local time integration error by the h-method, and the improved solution u^{p+m}_{n+1} allows for the estimation of the local time integration error by the p-method.
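Analogously, a sketch of the p-method estimate, assuming a hypothetical galerkin_step(u, t, dt, p) that performs one Galerkin time step of polynomial degree p:

```python
def p_method_error_estimate(galerkin_step, u_n, t_n, dt, p, m=1):
    """Local p-method error estimate: compare the basic p-Galerkin step
    with an improved comparison solution of degree p + m."""
    u_basic = galerkin_step(u_n, t_n, dt, p)         # u^{p}_{n+1}
    u_improved = galerkin_step(u_n, t_n, dt, p + m)  # u^{p+m}_{n+1}
    return u_improved - u_basic
```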
Both present error estimates of the h- and p-method are very well suited to estimate the real time integration error. This high quality of the error measures comes, of course, at a very high numerical effort. As a consequence, these error estimates are applied if highly robust adaptive integrations and absolutely reliable calculations of multiphysics problems are necessary. Since unreliable prognoses of the long term behavior of concrete structures are worthless, it is highly recommended to invest the additional computational time for the error estimates of the h- and p-method. If the solution behavior of the durability problem is completely understood by the engineer, he may switch to the error indicators of the h- and p-method for further parametric studies. Furthermore, these error estimates are applied to study the numerical properties of the present time integration schemes in the context of non-linear durability problems.
4.2.8.2.2 Local a Posteriori h- and p-Method Error Indicators
Error indicators of the h- and p-method are only applied if the numerical effort for the error estimates discussed in the previous section is significantly too high. These kinds of error indicators are characterized by comparison solutions of lower quality. The present error indicators are either based on the h-method or the p-method, compare Figure 4.43.
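One plausible realization of such a lower-quality comparison solution, consistent with the enlarged time step branch sketched in Figure 4.43, is to integrate over the last two steps with a single step of size 2Δt; the concrete construction used in the original work may differ, and step is the same assumed integrator interface as above.

```python
def h_method_error_indicator(step, u_nm1, v_nm1, t_nm1, u_np1, dt):
    """h-method error indicator: the comparison solution is computed with
    an enlarged step 2*dt over [t_{n-1}, t_{n+1}] and is therefore of
    lower quality (and cheaper) than the basic solution u_np1."""
    u_coarse, _ = step(u_nm1, v_nm1, t_nm1, 2.0 * dt)  # one enlarged step
    return u_np1 - u_coarse                            # indicator, not an estimate
```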
4.2.8.2.3 Local Zienkiewicz a Posteriori Error Indicators
Motivated by the high numerical effort of h- and p-method error estimators as well as indicators, alternative error measures using the Taylor expansion of the primary variables were developed. This basic idea was first published by [868] and later enriched by several extensions by [494, 675, 833, 834, 566]. So-called Zienkiewicz error
Table 4.6 Error indicators for Newmark type time integration schemes (e_ZX: [868], e_LZW: [494], e_RS: [675], e_R: [676]) for non-linear second order initial value problems r_i(ü, u̇, u) = r
indicators compare the Taylor expansion of the primary variable at the end of the time step

or by the estimation of the higher order time derivatives u⃛_n and u⃜_n. Namely,
• the Zienkiewicz Xie error indicator,
• the Li Zeng error indicator,
• the Riccius error indicator, and
• the Rickelt error indicator
are special cases of the error indicator defined by equation (4.141). Since these error indicators are defined in terms of accelerations, different derivations are required for first order initial value problems compared to second order initial value problems on the basis of equation (4.141). They substitute ü_n and u⃛_n by adequate difference approximations in terms of velocities, see Table 4.7.
Table 4.7 Error indicators for Newmark type time integration schemes (e_ZX: [868], e_LZW: [494], e_RS: [675], e_R: [676]) for non-linear first order initial value problems r_i(u̇, u) = r
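To illustrate the structure of these indicators, a sketch of the acceleration-based (second order) variant in its commonly cited Zienkiewicz-Xie form is given below; the exact entries of Tables 4.6 and 4.7 are not reproduced here, so the prefactor should be read as an assumption.

```python
import numpy as np

def zienkiewicz_xie_indicator(a_n, a_np1, dt, beta):
    """Local error indicator of Zienkiewicz-Xie type for one Newmark step:
    e ~ (beta - 1/6) * dt**2 * (a_{n+1} - a_n), i.e. the difference of the
    accelerations at the ends of the step scaled by the step size squared."""
    return (beta - 1.0 / 6.0) * dt ** 2 * (np.asarray(a_np1) - np.asarray(a_n))
```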
4.2.8.2.4 Adaptive Time Stepping Procedure
As a basis of adaptive time stepping procedures the error vectors (4.136, 4.137, 4.141) are transformed to a scalar valued relative error measure by using various alternative reference values u_ref:
o represents the order of accuracy of the basic time stepping scheme. For e > ν_2 η the last time step is repeated with Δt_new, and for e < ν_1 η the next time step is solved with Δt_new.
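A sketch of such a step size controller is given below. The update rule Δt_new = Δt (η/e)^{1/o} is a common choice consistent with the role of the order of accuracy o described above; the exact formula and the values of the safety parameters ν_1, ν_2 used in the original procedure are assumptions here.

```python
def adapt_time_step(dt, e, eta, order, nu1=0.5, nu2=1.5,
                    dt_min=1.0e-8, dt_max=1.0):
    """Step size control driven by the scalar relative error measure e and
    the tolerance eta. Returns (dt_new, repeat_step)."""
    # assumed controller: dt_new = dt * (eta / e)**(1/order)
    dt_new = dt * (eta / max(e, 1.0e-16)) ** (1.0 / order)
    dt_new = min(max(dt_new, dt_min), dt_max)

    if e > nu2 * eta:       # error too large: reject and repeat the step
        return dt_new, True
    if e < nu1 * eta:       # error very small: accept, enlarge the next step
        return dt_new, False
    return dt, False        # accept and keep the current step size
```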
Fig 4.44 Algorithmic set-up for the error controlled adaptive time integration by
Newmark-α integration schemes combined with error indicators and the adaptive
time step control by [868], compare Figure 4.24
Fig 4.45 Algorithmic set-up for the error controlled adaptive time integration by
Newmark-α or p-Galerkin methods and h-method error estimates/indicators
4.2.8.2.5 Algorithmic Set-Up
A typical algorithmic set-up for the adaptively controlled time integration of non-linear second order semidiscrete initial value problems is shown in Figure 4.44. In this overview the Newmark type integration scheme by [569, 198] is combined with error indicators and the adaptive time stepping procedure by [868] as a representative example. The boxes in Figure 4.44 illustrate links with the element and material levels of multifield durability finite element programs (compare Section 4.2.7). As illustrated by Figures 4.45 and 4.46
Fig 4.46 Algorithmic set-up for the error controlled adaptive time integration by
p-Galerkin methods and p-method error estimates/indicators
the error analysis using improved comparison solutions performed by the h- or p-method causes high numerical expenses. In order to avoid this, error indicators of the h- or p-method can be used as basis for the adaptive time stepping scheme (compare [452, 459]).
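A much simplified skeleton of such an error controlled integration loop, in the spirit of Figures 4.44-4.46: step and error_measure are assumed interfaces (the element and material level of the multifield durability problem is hidden inside step), and adapt_time_step is the controller sketched above.

```python
def adaptive_integration(step, error_measure, u0, v0, t_end, dt0, eta, order,
                         max_retries=10):
    """Error controlled time integration loop with step repetition."""
    u, v, t, dt = u0, v0, 0.0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)                        # do not overshoot t_end
        for _ in range(max_retries):
            u_new, v_new = step(u, v, t, dt)           # solve the current time step
            e = error_measure(u, v, u_new, v_new, dt)  # scalar relative error
            dt_new, repeat = adapt_time_step(dt, e, eta, order)
            if not repeat:
                break
            dt = dt_new                                # repeat with a reduced step
        u, v, t, dt = u_new, v_new, t + dt, dt_new
    return u, v
```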
4.2.9 Discontinuous Finite Elements
Authored by Klaus Hackl
4.2.9.1 Overview and Motivation
Authored by Markus Peters and Klaus Hackl
To simulate the displacements in the vicinity of a crack there are several numerical possibilities. In this section we focus on the finite element method using elements with embedded discontinuous fields: on the one hand the Strong Discontinuity Approach (SDA) and the Enhanced Assumed Strain (EAS) method, which is derived from the SDA, and on the other hand the eXtended Finite Element Method (XFEM). All these methods have in common that the strain or displacement field is enhanced to allow specific discontinuities independent of element edges.
The SDA and the EAS concepts are based on the same idea of enhancing the strain field so that a discontinuous strain field is obtained across the crack. For this reason the stress field is discontinuous while the displacement field is still continuous.
In contrast to that, the XFEM concept uses enhanced displacements. In addition, this enhancement contains functions which span the asymptotic near tip displacement field. Thus, the displacements and stresses are discontinuous across the crack.
4.2.9.2 Concepts
Authored by Markus Peters and Klaus Hackl
The discontinuous concepts which are presented in this book are the Strong Discontinuity Approach (SDA) and the Enhanced Assumed Strain (EAS), which are described in Section 4.2.9.2.2, and the eXtended Finite Element Method (XFEM), which is discussed in more detail in Section 4.2.9.2.1.

4.2.9.2.1 Extended Finite Element Method (XFEM)
The eXtended Finite Element Method (XFEM) is an efficient way to calculate the displacements and stresses of the near tip field. The ansatz for the displacements in the finite element approximation is enhanced by using functions which span the asymptotic near tip displacement field. This improves the accuracy of the approximation results, and the mesh dependency can be reduced significantly.
The concept of the XFEM is based on the Partition of Unity, which is described in the following section. By using it, the finite element approach is enhanced and results in the XFEM, which is described in Section 4.2.9.2.1.2. The functions which are used for this enhancement are those functions that span the function space of the near tip field, which is derived in Section 4.2.9.2.1.2. After deriving the XFEM displacement field it is necessary to develop a new integration method which converges accurately for singular functions (see Section 4.2.9.2.1.3). To improve the accuracy of the XFEM results, a p-version using hierarchical higher order Legendre polynomials (Section 4.2.9.2.1.4) is recommended. Finally, a three dimensional implementation of the XFEM is introduced in Section 4.2.9.2.1.5, and the differences between the XFEM for linear elastic fracture mechanics and cohesive cracks are discussed in Section 4.2.9.2.1.6.
4.2.9.2.1.1 Partition of Unity
The idea of the XFEM is based on the principle of the Partition of Unity published in [526]. The keynote of the Partition of Unity is to approximate a function by incorporating known functions into the approximation procedure.
The Lagrange polynomials which were used as standard shape functions have the following property inside one element:

Σ_i N_i(x) = 1,     (4.146)
Fig 4.47 Function to be approximated, equation (4.147)
which means that an arbitrary function can be approximated by a sum of the function multiplied with the shape functions.
The possibility arising from equation (4.146) becomes clearer using an example. The following function has to be approximated:
f(x) = 10(−0.9 + x) + 10(−0.9 + x)^5 + 0.5 cos(0.9 − x)^3     (4.147)
The function is divided into three parts (cp. Figure 4.47). In each part it is evaluated at p + 1 equidistant points to compute an interpolation polynomial whose polynomial order is varied with p = 1, ..., 8. In the middle partition the function is approximated by using a sum of polynomials of order p using

In equation (4.149) the knowledge of the position and the kind of the singularity is exploited. The Lagrange polynomials L_i^n(x) are defined as
L_i^n(x) = [(x − x_0) ··· (x − x_{i−1})(x − x_{i+1}) ··· (x − x_n)] / [(x_i − x_0) ··· (x_i − x_{i−1})(x_i − x_{i+1}) ··· (x_i − x_n)].     (4.150)

The results of the approximations are shown in Figure 4.48. It can easily be discovered that by using equation (4.149) the function f(x) is approximated much better for lower polynomial order than by using equation (4.148). The right and left part are approximated without singularity by using equation (4.148).
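The effect can be reproduced with a small numerical experiment. The sketch below interpolates the function on the middle partition once with plain Lagrange polynomials (in the spirit of equation (4.148)) and once with a partition-of-unity enrichment in which the "difficult" part of the function is assumed to be known a priori (in the spirit of equation (4.149)); the choice of the enrichment function g and the reconstruction of (4.147) used here are assumptions for illustration only.

```python
import numpy as np

def lagrange_basis(nodes, x):
    """Evaluate all Lagrange basis polynomials L_i^n(x), equation (4.150)."""
    x = np.asarray(x, dtype=float)
    basis = []
    for i, xi in enumerate(nodes):
        li = np.ones_like(x)
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        basis.append(li)
    return np.array(basis)                         # shape: (n+1, len(x))

def g(x):                                          # assumed known, "difficult" part
    return 10.0 * (x - 0.9) ** 5

def f(x):                                          # function (4.147) as reconstructed above
    return 10.0 * (x - 0.9) + g(x) + 0.5 * np.cos(0.9 - x) ** 3

p = 2                                              # deliberately low polynomial degree
nodes = np.linspace(1.0, 2.0, p + 1)               # middle partition of Figure 4.47
x = np.linspace(1.0, 2.0, 201)
N = lagrange_basis(nodes, x)

# standard interpolation: sum_i N_i(x) f(x_i)
f_std = (f(nodes)[:, None] * N).sum(axis=0)

# enriched interpolation: the known function g is carried by the partition of
# unity, sum_i N_i(x) (f(x_i) - g(x_i)) + g(x)
f_enr = ((f(nodes) - g(nodes))[:, None] * N).sum(axis=0) + g(x)

print("max error, standard interpolation:", np.abs(f_std - f(x)).max())
print("max error, enriched interpolation:", np.abs(f_enr - f(x)).max())
```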
We also refer to [526]. In [192] the Partition of Unity Method is investigated concerning the XFEM and the included blending elements. The accuracy of the displacements and stresses in these blending elements using the XFEM is also illustrated in Section 4.2.9.2.1.4.
Fig 4.48 Function to be approximated from equation (4.147) together with the
approximations, left from equation (4.148), right from equation (4.149)
Fig 4.49 Definition of normal and tangential vector
4.2.9.2.1.2 XFEM Displacement Field
It was shown in Section 4.2.9.2.1.1 that the Partition of Unity Method offers the possibility to enrich the finite element approximation by analytic functions and to implement them into the displacement field. By using the finite element method the continuity of the displacement field can be ensured. The XFEM displacement field can be written as
û = u_i N_i(x) + b_j N_j(x) H(x) + c_kl N_k(x) F_l(x)     (4.151)
It contains
• the standard FEM approximation (u_i N_i),
• a term to model the crack opening (b_j N_j H), with b_j representing half of the crack width and H as the Heaviside function,
  smallest distance to the point x, n is the normal vector to the crack with its orientation chosen such that t × n = z. The vector t is the tangential vector to the crack (cp. Figure 4.49),
• the near tip displacement field (c_kl N_k F_l). The functions F_l herein span the asymptotic near tip displacement field (see Figure 4.50):
F_l = { √r sin(ϕ/2), √r cos(ϕ/2), √r sin(ϕ/2) sin(ϕ), √r cos(ϕ/2) sin(ϕ) }     (4.152)

The element stiffness matrix can now be written as