D1 Assume given a boundary point distribution;
D2 Generate a Delaunay triangulation of the boundary points;
D3 Using the information stored on the background grid and the sources, compute the desired element size and shape for the points of the current mesh;
D4 nnewp=0: initialize the counter of new points;
D5 do: loop over the elements of the current mesh:
    Define a new point inewp at the centroid of ielem;
    Compute the distances dispc(1:4) from inewp to the four nodes of ielem;
    Compare dispc(1:4) to the desired element size and shape;
    If any of the dispc(1:4) is smaller than a fraction α of the desired element length: skip the element (goto D6);
    Compute the distances dispn(1:nneip) from inewp to the new points in the neighbourhood;
    If any of the dispn(1:nneip) is smaller than a fraction β of the desired element length: skip the element (goto D6);
    Store inewp as a new point: nnewp=nnewp+1;
    Store the desired element size and shape for the new point;
D6 enddo
D7 if(nnewp.gt.0) then
    Perform a Delaunay triangulation for the new points;
    goto D4;
   endif
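The point-filtering loop above can be sketched in Python. The function name introduce_points, the purely isotropic treatment of the desired size (shape/stretching omitted) and the simple averaging of the nodal sizes to the centroid are illustrative assumptions, not the text's implementation:

```python
import numpy as np

def introduce_points(xyz, tets, desired_size, alpha=0.4, beta=0.4):
    """One pass of Delaunay point introduction (sketch).

    xyz          : (npoin, 3) point coordinates
    tets         : (nelem, 4) tetrahedron connectivity
    desired_size : (npoin,) desired element length at each point
    Returns the accepted new points and their desired sizes.
    """
    new_pts, new_sizes = [], []
    for tet in tets:
        # define a new point at the centroid of the element
        cen = xyz[tet].mean(axis=0)
        # desired length interpolated to the centroid (simple average)
        h = desired_size[tet].mean()
        # distances from the centroid to the four element nodes
        dispc = np.linalg.norm(xyz[tet] - cen, axis=1)
        if np.any(dispc < alpha * h):
            continue  # skip: too close to an existing node
        # distances to new points already accepted in this pass
        if new_pts:
            dispn = np.linalg.norm(np.asarray(new_pts) - cen, axis=1)
            if np.any(dispn < beta * h):
                continue  # skip: too close to another new point
        new_pts.append(cen)       # store the new point
        new_sizes.append(h)       # store its desired size
    return np.array(new_pts), np.array(new_sizes)
```

The accepted points would then be handed to the Delaunay kernel for insertion, and the pass repeated until no new points survive the α/β checks.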
The procedure outlined above introduces new points in the elements. One can also introduce them at edges (George and Borouchaki (1998)). In the following, individual aspects of the general algorithm outlined above are treated in more detail.
If points lie too close together relative to the machine precision, the round-off errors become too large to correct, and the triangulation process breaks down. Baker (1987) has determined the following condition: given the set of points P := {x_1, x_2, ..., x_n} with characteristic lengths d_max = max |x_i − x_j| and d_min = min |x_i − x_j| (i ≠ j), a consistent triangulation is only possible if the machine precision ε satisfies (d_min/d_max)^2 > ε.
Consider the generation of a mesh suitable for inviscid flow simulations for a typical transonic airliner (e.g. a Boeing 747). Taking the wing chord length L as a reference length, the smallest elements will have a side length of the order of 10^-3 L, while far-field elements may be located as far as 10^2 L from each other. This implies that (d_min/d_max)^2 = 10^-10, which is beyond the approximately 10^-8 accuracy of 32-bit arithmetic. For these reasons, unstructured grid generators generally operate with 64-bit arithmetic precision. When introducing points, a check is conducted for the condition
|d_p − R_i|^2 < ε, (3.33)

where d_p denotes the distance of the point to be introduced from the circumcentre of tetrahedron i, R_i the corresponding circumradius, and ε a preset tolerance that depends on the floating-point accuracy of the machine. If condition (3.33) is met, the point is rejected and stored for later use. This 'skip and retry' technique is similar to the 'sweep and retry' procedure already described for the AFT. In practice, most grid generators work with double precision and condition (3.33) is seldom met.
A related problem of degeneracy that may arise is linked to the creation of very flat elements or 'slivers' (Cavendish et al. (1985)). The calculation of the circumsphere for a tetrahedron is given by the conditions

(x_i − x_c) · (x_i − x_c) = R^2, i = 1, ..., 4, (3.34)

yielding four equations for the four unknowns x_c, R. If the four points of the tetrahedron lie on a plane, the solution is impossible (R → ∞). In such a case, the point to be inserted is rejected and stored for later use (skip and retry).
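A minimal sketch of the circumsphere computation implied by equation (3.34): subtracting the i = 1 equation from the other three yields a 3x3 linear system for x_c. The function name and the determinant-based degeneracy tolerance are illustrative choices:

```python
import numpy as np

def circumsphere(x):
    """Circumcentre and radius of a tetrahedron x (4x3 array).

    Subtracting the first equation of (x_i - x_c).(x_i - x_c) = R^2
    from the other three gives 2 (x_i - x_1) . x_c = |x_i|^2 - |x_1|^2.
    Returns (x_c, R), or (None, inf) for a (nearly) flat element.
    """
    A = 2.0 * (x[1:] - x[0])                      # 3x3 system matrix
    b = np.sum(x[1:] ** 2, axis=1) - np.sum(x[0] ** 2)
    # a flat (sliver-limit) tetrahedron makes A singular: R -> infinity,
    # so the point is rejected and stored for later use (skip and retry)
    if abs(np.linalg.det(A)) < 1e-12:
        return None, np.inf
    xc = np.linalg.solve(A, b)
    return xc, np.linalg.norm(x[0] - xc)
```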
3.7.2 DATA STRUCTURES TO MINIMIZE SEARCH OVERHEADS
The operations that could potentially reduce the efficiency of the algorithm to O(N^1.5) or even O(N^2) are:

(a) finding all tetrahedra whose circumspheres contain a point (step B3);
(b) finding all the external faces of the void that results due to the deletion of a set of tetrahedra (step B5);
(c) finding the closest new points to a point (step D6);
(d) finding for any given location the values of generation parameters from the background grid and the sources (step D3).
The verb 'find' appears in all of these operations. The main task is to design the best data structures for performing the search operations (a)–(d) as efficiently as possible. As before, many variations are possible here, and some of these data structures have already been discussed for the AFT. The principal data structure required to minimize search overheads is the 'element adjacent to element' or 'element surrounding element' structure esuel(1:nfael,1:nelem) that stores the neighbour elements of each element. This structure, which was already discussed in Chapter 2, is used to march quickly through the grid when trying to find the tetrahedra whose circumspheres contain a point. Once a set of elements has been marked for removal, the outer faces of this void can be obtained by interrogating esuel. As the new points to be introduced are linked to the elements of the current mesh (step D6), esuel can also be used to find the closest new points to a point. Furthermore, the equivalent esuel structure for the background grid can be used for fast interpolation of the desired element size and shape.
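A possible Python analogue of the esuel structure, built with a face hash (the Fortran array layout esuel(1:nfael,1:nelem) of the text is followed only in spirit, as a list of four neighbours per element):

```python
from collections import defaultdict
from itertools import combinations

def build_esuel(tets):
    """'Elements surrounding elements' adjacency, esuel[ie][jf].

    For each tetrahedron ie and each of its four faces jf, store the
    element sharing that face, or -1 on the boundary.  Faces are keyed
    by their sorted node triple.
    """
    face2elem = defaultdict(list)
    for ie, tet in enumerate(tets):
        for face in combinations(sorted(tet), 3):
            face2elem[face].append(ie)
    esuel = [[-1] * 4 for _ in tets]
    for ie, tet in enumerate(tets):
        for jf, face in enumerate(combinations(sorted(tet), 3)):
            for je in face2elem[face]:
                if je != ie:
                    esuel[ie][jf] = je
    return esuel
```

Once one tetrahedron whose circumsphere contains the new point is known, the full set (the cavity) is found by a flood fill through these neighbour entries rather than by a global search.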
3.7.3 BOUNDARY RECOVERY
A major assumption that is used time and again to make the Delaunay triangulation process both unique and fast is the Delaunay property itself: namely, that no other point should reside in the circumsphere of any tetrahedron. This implies that in general, during the grid generation process, some of the tetrahedra will break through the surface. The result is a mesh that satisfies the Delaunay property, but is not surface conforming (see Figure 3.26).
Figure 3.26 Non-body conforming Delaunay triangulation
In order to recover a surface conforming mesh, a number of techniques have been employed.

(a) Extra point insertion. By inserting points close to the surface, one can force the triangulation to be surface conforming. This technique has been used extensively by Baker (1987, 1989) for complete aircraft configurations. It can break down for complex geometries, but is commonly used as a 'first cut' approach within more elaborate approaches.
(b) Algebraic surface recovery. In this case, the faces belonging to the original surface point distribution are tested one by one. If any tetrahedron crosses these faces, local reordering is invoked. These local operations change the connectivity of the points in the vicinity of the face in order to arrive at a surface-conforming triangulation. The complete set of possible transformations can be quite extensive. For this reason, surface recovery can take more than half of the total time required for grid generation using the Delaunay technique (Weatherill (1992), Weatherill et al. (1993a)).
3.7.4 ADDITIONAL TECHNIQUES TO INCREASE SPEED
There are some additional techniques that can be used to improve the performance of the Delaunay grid generator. The most important of these are the following.
(a) Point ordering. The efficiency of the DTT is largely dependent on the amount of time taken to find the tetrahedra to be deleted as a new point is introduced. For the main point introduction loop over the current set of tetrahedra, these elements can be found quickly from each current element and the neighbour list esuel. It is advisable to order the first set of boundary points so that contiguous points in the list are neighbours in space. Once such an ordering has been achieved, only a local set of tetrahedra needs to be inspected. After one of the tetrahedra to be eliminated has been found, the rest are again determined via esuel.
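One simple way to obtain such a spatial ordering is a coarse bin sort, sketched below; the bin count and the lexicographic key are arbitrary choices, and space-filling-curve (Morton/Hilbert) orderings are common refinements of the same idea:

```python
import numpy as np

def spatial_order(xyz, nbins=8):
    """Order points so that list neighbours are spatial neighbours.

    Points are assigned to a coarse Cartesian grid of nbins^3 bins and
    reordered bin by bin (lexicographic bin key, stable sort).
    """
    lo, hi = xyz.min(axis=0), xyz.max(axis=0)
    ijk = np.minimum(((xyz - lo) / (hi - lo + 1e-30) * nbins).astype(int),
                     nbins - 1)
    keys = (ijk[:, 0] * nbins + ijk[:, 1]) * nbins + ijk[:, 2]
    return np.argsort(keys, kind="stable")
```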
(b) Vectorization of background grid/source information. During each pass over the elements introducing new points, the distance from the element centroid to the four nodes is compared with the element size required from the background grid and the sources. These distances may all be computed at the same time, enabling vectorization and/or parallelization on shared-memory machines. After each pass, the distance required for the new points is again computed in vector/parallel mode.
(c) Global h-refinement. While the basic Delaunay algorithm is a scalar algorithm with a considerable number of operations (search, compare, check), a global refinement of the mesh (so-called h-refinement) requires far fewer operations. Moreover, it can be completely vectorized, and is easily ported to shared-memory parallel machines. Therefore, the grid generation process can be made considerably faster by first generating a coarser mesh that has all the desired variations of element size and shape in space, and then refining this first mesh globally with classic h-refinement. Typical speedups achieved by using this approach are 1:6 to 1:7 for each level of global h-refinement.
(d) Multiscale point introduction. The use of successive passes of point generation as outlined above automatically results in a 'multiscale' point introduction. For an isotropic mesh, each successive pass will result in five to seven times the number of points of the previous pass. In some applications, all point locations are known before the creation of elements begins. Examples are remote sensing (e.g. drilling data) and Lagrangian particle or particle finite element method solvers (Idelsohn et al. (2003)). In this case, a spatially ordered list of points for the fine mesh will lead to a large number of faces in the 'star-shaped domain' when elements are deleted and re-formed. The case shown in Figure 3.27 may be exaggerated by the external shape of the domain (for some shapes and introduction patterns, nearly all elements can be in the star-shaped domain). The main reason for the large number of elements treated, and hence the inefficiency, is the large discrepancy in size 'before' and 'after' the last point introduced. In order to obtain similar element sizes throughout the mesh, and thus near-optimal efficiency, the points are placed in bins. Points are then introduced by considering, in several passes over the mesh, every eighth, fourth, second, etc., bin in each spatial dimension until the list of points is exhausted.
Figure 3.27 Large number of elements in a star-shaped domain
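The bin-pass schedule can be sketched as follows; the function multiscale_passes and the stride sequence 8, 4, 2, 1 are an illustrative reading of the description above:

```python
def multiscale_passes(nbin):
    """Order of bin indices for multiscale point introduction.

    Pass 1 visits every 8th bin in each dimension, pass 2 every 4th,
    then every 2nd, then all remaining bins, so that each pass adds
    points at a roughly uniform, successively finer spacing.
    """
    visited = set()
    passes = []
    for stride in (8, 4, 2, 1):
        current = []
        for i in range(0, nbin, stride):
            for j in range(0, nbin, stride):
                for k in range(0, nbin, stride):
                    if (i, j, k) not in visited:
                        visited.add((i, j, k))
                        current.append((i, j, k))
        passes.append(current)
    return passes
```

Because each pass inserts points at a nearly uniform spacing, consecutive insertions meet elements of comparable size and the star-shaped domains stay small.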
Sustained speeds in excess of 250 000 tetrahedra per minute have been achieved on the Cray-YMP (Weatherill (1992), Weatherill and Hassan (1994), Marcum (1995)), and the procedure has been ported to parallel machines (Weatherill (1994)). In some cases, the Delaunay circumsphere criterion is replaced by or combined with a min(max) solid angle criterion (Joe (1991a,b), Barth (1995), Marcum and Weatherill (1995b)), which has been shown to improve the quality of the elements generated. For these techniques, a 3-D edge-swapping technique is used to speed up the generation process.
3.7.5 ADDITIONAL TECHNIQUES TO ENHANCE RELIABILITY AND QUALITY

The Delaunay algorithm described above may still fail for some pathological cases. The following techniques have been found effective in enhancing the reliability of Delaunay grid generators to a point where they can be applied on a routine basis in a production environment.
(a) Avoidance of bad elements. It is important not to allow any bad elements to be created during the generation process. These bad elements can wreak havoc when trying to introduce further points at a later stage. Therefore, if the introduction of a point creates bad elements, the point is skipped. The quality of an element can be assessed while computing the circumsphere.
(b) Consistency in degeneracies. The Delaunay criterion can break down for some 'degenerate' point distributions. One of the most common degeneracies arises when the points are distributed in a regular manner. If five or more points lie on a sphere (or four or more points lie on a circle in two dimensions), the triangulation is not unique, since the 'inner' connections between these points can be taken in a variety of ways. This and similar degeneracies do not present a problem as long as the decision as to whether a point is inside or outside the sphere is consistent for all the tetrahedra involved.
(c) Front-based point introduction. When comparing 2-D grids generated by the AFT or the DTT, the most striking difference lies in the appearance of the grids. The Delaunay grids always look more 'ragged' than the advancing front grids. This is because the grid connectivity obtained from Delaunay triangulations is completely free, and the introduction of points in elements does not allow precise control. In order to improve this situation, several authors (Merriam (1991), Mavriplis (1993), Müller et al. (1993), Rebay (1993), Marcum (1995)) have tried to combine the two methods. These methods are called advancing front Delaunay, and can produce extremely good grids that satisfy the Delaunay or min(max) criterion.
(d) Beyond Delaunay. The pure Delaunay circumsphere criterion can lead to a high percentage of degenerate elements called 'slivers'. In two dimensions the probability of bad elements is much lower than in three dimensions, and for this reason this shortcoming was ignored for a while. However, as 3-D grids became commonplace, the high number of slivers present in typical Delaunay grids had to be addressed. The best way to avoid slivers is by relaxing the Delaunay criterion. The star-shaped domain is modified by adding back elements whose faces would lead to bad elements. This is shown diagrammatically in Figure 3.28. The star-shaped domain, which contains element A–C–B, would lead, after reconnection, to the bad (inverted) element A–B–P. Therefore, element A–C–B is removed from the star-shaped domain, and added back to the mesh before the introduction of the new point P. This fundamental departure from the traditional Delaunay criterion, first proposed by George et al. (1990) to the chagrin of many mathematicians and computational geometers, has allowed this class of unstructured grid generation algorithms to produce quality grids reliably. It is a simple change, but has made the difference between a theoretical exercise and a practical tool.
Figure 3.28 The modified Delaunay algorithm
3.8 Grid optimization

In order to circumvent any possible problems these irregular grids may trigger for field solvers, the generated mesh is optimized further in order to improve its uniformity. The most commonly used ways of mesh optimization are:
(a) removal of bad elements;
(b) Laplacian smoothing;
(c) functional optimization;
(d) selective mesh movement; and
(e) diagonal swapping.
3.8.1 REMOVAL OF BAD ELEMENTS
The most straightforward way to improve a mesh containing bad elements is to get rid of them. For tetrahedral grids this is particularly simple, as the removal of an internal edge does not lead to new element types for the surrounding elements. Once the bad elements have been identified, they are compiled into a list and interrogated in turn. An element is removed by collapsing the points of one of the edges, as shown in Figure 3.29.
Figure 3.29 Element removal by edge collapse
This operation also removes all the elements that share this edge. It is advisable to check which of the points of the edge should be kept: point 1, point 2 or a point somewhere on the edge (e.g. the mid-point). This implies checking all elements that contain either point 1 or point 2. This procedure of removing bad elements is simple to implement and relatively fast. On the other hand, it can only improve mesh quality to a certain degree. It is therefore used mainly in a pre-smoothing or pre-optimization stage, where its main function is to eradicate elements of very bad quality from the mesh.
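The edge collapse itself can be sketched in a few lines; collapse_edge is a hypothetical helper, and the validity checks discussed above (which endpoint to keep, no element inversion) are deliberately omitted:

```python
def collapse_edge(tets, p_keep, p_gone):
    """Collapse edge (p_keep, p_gone): p_gone is merged into p_keep.

    Every tetrahedron containing the edge vanishes; the surviving
    tetrahedra that reference p_gone are renumbered to p_keep.  A
    production version would first verify that no survivor inverts.
    """
    out = []
    for tet in tets:
        if p_keep in tet and p_gone in tet:
            continue  # element shared the collapsed edge: removed
        out.append([p_keep if p == p_gone else p for p in tet])
    return out
```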
3.8.2 LAPLACIAN SMOOTHING
A number of smoothing techniques are lumped under this name. The edges of the triangulation are assumed to represent springs. These springs are relaxed in time using an explicit timestepping scheme, until an equilibrium of spring forces has been established. Because 'globally' the variations of element size and shape are smooth, most of the non-equilibrium forces are local in nature. This implies that a significant improvement in mesh quality can be achieved rather quickly. The force exerted by each spring is proportional to its length and acts along its direction. Therefore, the sum of the forces exerted by all the springs surrounding a point leads to a movement of the point of the form

Δx_i = Δt (1/n_i) Σ_j (x_j − x_i),

where n_i denotes the number of points surrounding point i and the sum runs over these neighbours.
At the surface of the computational domain, no movement of points is allowed, i.e. Δx = 0. Usually, the timestep (or relaxation parameter) is chosen as Δt = 0.8, and five to six timesteps yield an acceptable mesh. The application of the Laplacian smoothing technique can result in inverted or negative elements. The presence of even one element with a negative Jacobian will render most field solvers inoperable. Therefore, these negative elements are eliminated. For the AFT, it has been found advisable to remove not only the negative elements, but also all elements that share points with them. This element removal gives rise to voids or holes in the mesh, which are regridded using the AFT. Another option, which can also be used for the Delaunay technique, is the removal of negative elements using the techniques described before.
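A minimal sketch of the spring relaxation, assuming the resultant force at each point is normalized by the number of surrounding points (one plausible choice; the exact normalization is not shown in the text):

```python
import numpy as np

def laplacian_smooth(xyz, edges, is_boundary, dt=0.8, nsteps=5):
    """Explicit spring relaxation (Laplacian smoothing), sketched.

    Each edge acts as a spring whose force is proportional to its
    length and directed along it; interior points move by dt times the
    averaged resultant.  Boundary points are held fixed (delta x = 0).
    """
    xyz = xyz.copy()
    for _ in range(nsteps):
        force = np.zeros_like(xyz)
        count = np.zeros(len(xyz))
        for i, j in edges:
            dv = xyz[j] - xyz[i]
            force[i] += dv           # spring pulls i towards j
            force[j] -= dv           # and j towards i
            count[i] += 1
            count[j] += 1
        dx = dt * force / np.maximum(count, 1)[:, None]
        dx[is_boundary] = 0.0        # no movement on the surface
        xyz += dx
    return xyz
```

After each application, any elements driven to a negative Jacobian would be detected and removed as described above.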
3.8.3 GRID OPTIMIZATION
Another way to improve a given mesh is by writing a functional whose magnitude depends on the discrepancy between the desired and actual element size and shape (Cabello et al. (1992)). The minimization of this functional, whose value depends on the coordinates of the points, is carried out using conventional minimization techniques. These procedures represent a sophisticated mesh movement strategy.
3.8.4 SELECTIVE MESH MOVEMENT
Selective mesh movement tries to improve the mesh quality by performing a local movement of the points. If the movement results in an improvement of mesh quality, the movement is kept. Otherwise, the old point position is retained. The most natural way to move points is along the directions of the edges touching them. With the notation of Figure 3.30, point i is moved in the direction x_j − x_i by a fraction α of the edge length, i.e.

x_i → x_i + α (x_j − x_i).

Figure 3.30 Mesh movement directions
After each of these movements, the quality of each element containing point i is checked. Only movements that produce an improvement in element quality are kept. The edge fraction α is diminished for each pass over the elements. Typical values for α are 0.02 ≤ α ≤ 0.10, i.e. the movement does not exceed 10% of the edge length. This procedure, while general, is extremely expensive for tetrahedral meshes. This is because, for each pass over the mesh, we have approximately seven edges for each point, i.e. 14 movement directions, and approximately 22 elements (4 nodes per element, 5.5 elements per point) surrounding each point to be evaluated for each of the movement directions, i.e. approximately 308*NPOIN elements to be tested. To make matters worse, the evaluation of element quality typically involves arc-cosines (for angle evaluations), which consume a large amount of CPU time. The main strength of selective mesh movement algorithms is that they remove very bad elements efficiently. They are therefore used only for points surrounded by bad elements, and as a post-smoothing procedure.
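The trial-movement logic can be sketched as follows; quality is a user-supplied callable returning the worst element quality around point i, with lower values taken as better (an arbitrary convention for this sketch):

```python
import numpy as np

def try_move_point(xyz, i, neighbours, quality, alpha=0.05):
    """Trial movement of point i along its edge directions (sketched).

    Each direction +/- (x_j - x_i) is tried with step fraction alpha;
    a move is kept only if quality(xyz) improves (decreases).
    """
    best = quality(xyz)
    for j in neighbours:
        for sign in (+1.0, -1.0):
            trial = xyz.copy()
            trial[i] += sign * alpha * (xyz[j] - xyz[i])
            q = quality(trial)
            if q < best:            # keep only improving movements
                xyz, best = trial, q
    return xyz
```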
3.8.5 DIAGONAL SWAPPING
Diagonal swapping attempts to improve the quality of the mesh by reconnecting the points locally in a different way (Freitag and Gooch (1997)). Examples of possible 3-D swaps are shown in Figures 3.31 and 3.32.
Figure 3.31 Diagonal swap case 2:3
An optimality criterion that has proven reliable is the one proposed by George and Borouchaki (1998),

Q = h_max S / V, (3.38)

where h_max, S and V denote the maximum edge length, the total surface area and the volume of a tetrahedron.

Figure 3.32 Diagonal swap case 6:8

The number of cases to be tested can grow factorially with the number of elements surrounding an edge. Figure 3.33 shows the possibilities to be tested for four, five and six elements surrounding an edge. Note the rapid (factorial) increase of cases with the number of elements surrounding the edge.
Figure 3.33 Swapping cases
Given that these tests are computationally intensive, considerable care is required when coding a fast diagonal swapper. Techniques that are commonly used include:

- treatment of bad (Q above a preset threshold), untested elements only;
- processing of elements in an ordered way, starting with the worst (highest chance of reconnection);
- rejection of bad combinations at the earliest possible indication of worsening quality;
- marking of tested and unswapped elements in each pass.
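The criterion (3.38) can be evaluated directly; this sketch computes Q = h_max S / V for a 4x3 array of vertex coordinates (the guard on V for flat elements is an illustrative choice):

```python
import numpy as np

def george_quality(x):
    """Quality Q = h_max * S / V of a tetrahedron x (4x3 array).

    h_max: longest edge, S: total surface area, V: volume.  Well-shaped
    elements give small Q; slivers (V -> 0) give Q -> infinity.
    """
    pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    hmax = max(np.linalg.norm(x[a] - x[b]) for a, b in pairs)
    faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
    S = sum(0.5 * np.linalg.norm(np.cross(x[b] - x[a], x[c] - x[a]))
            for a, b, c in faces)
    V = abs(np.dot(x[1] - x[0], np.cross(x[2] - x[0], x[3] - x[0]))) / 6.0
    return hmax * S / max(V, 1e-300)  # tiny floor avoids division by zero
```

For the equilateral tetrahedron Q is scale invariant and equals 6*sqrt(6) ≈ 14.70, which is the best (smallest) attainable value; a flat sliver drives Q towards infinity, which is what makes the criterion a useful swap test.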
3.9 Optimal space-filling tetrahedra
Unlike the optimal (equilateral) triangle, the optimal (equilateral) tetrahedron shown in Figure 3.34 is not space-filling. All the edge angles have α = 70.52°, a fact that does not permit an integer division of the 360° required to surround any given edge. Naturally, the question arises as to which is the optimal space-filling tetrahedron (Fuchs (1998), Naylor (1999), Bridson et al. (2005)).
Figure 3.34 An ideal (equilateral) tetrahedron
One way to answer this question is to consider the deformation of a cube split into tetrahedra as shown in Figure 3.35. The tetrahedra are given by: tet1=1,2,4,5, tet2=2,4,5,6, tet3=4,8,5,6, tet4=2,3,4,6, tet5=4,3,8,6, tet6=3,7,8,6. This configuration is subjected to an affine transformation, whereby faces 4,3,7,8 and 5,6,7,8 are moved with an arbitrary translation vector. The use of arbitrary translation vectors for the face movement retains generality. In order to keep face 1,2,4,3 in the x, y plane, no movement is allowed in the z-direction for face 4,3,7,8. Since the faces remain plane, the configuration is space-filling (and hence so are the tetrahedra).
The problem may be cast as an optimization problem with five unknowns, with the aim of maximizing/minimizing quality criteria for the tetrahedra obtained. Typical quality criteria include:

- equidistance of sides;
- maximization of the minimum angle;
- equalization of all angles;
- George's h·Area/Volume criterion (equation (3.38)).
Figure 3.35 A cube subdivided into tetrahedra
An alternative is to invoke the argument that any space-filling tetrahedron must be self-similar when refined. In this way, the tetrahedron can be regarded as coming from a previously refined tetrahedron, thus filling space. If we consider the refinement configuration shown in Figure 3.36, the displacements in x, y of point 3 and the displacements in x, y, z of point 4 may be regarded as the design variables, and the problem can again be cast as an optimization problem.
Figure 3.36 H-refinement of a tetrahedron
As expected, both approaches yield the same optimal space-filling tetrahedron, given by:

l_min = 1.0, l_max = 1.157,
α1 = α2 = α3 = α4 = 60.00°,
α5 = α6 = 90.00°.
One can show that this tetrahedron corresponds to the Delaunay triangulation (tetrahedrization) of the points of a body-centred cubic (BCC) lattice, given by two Cartesian point distributions that have been displaced by (Δx/2, Δy/2, Δz/2) (see Figure 3.37). Following Naylor (1999), this tetrahedron will be denoted as an isotet.
Figure 3.37 BCC lattice
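The edge-length ratio of the isotet can be verified numerically; the vertex coordinates below are one representative Delaunay tetrahedron picked from a unit BCC lattice (an illustrative choice of element):

```python
import numpy as np

# Two unit Cartesian lattices offset by (1/2, 1/2, 1/2): a BCC lattice.
# One representative Delaunay tetrahedron of that lattice:
tet = np.array([[0.0, 0.0, 0.0],     # corner point, first lattice
                [1.0, 0.0, 0.0],     # corner point, first lattice
                [0.5, 0.5, 0.5],     # body-centre, second lattice
                [0.5, 0.5, -0.5]])   # body-centre, second lattice

pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
lengths = sorted(np.linalg.norm(tet[a] - tet[b]) for a, b in pairs)
ratio = lengths[-1] / lengths[0]     # l_max / l_min = 2/sqrt(3) ~ 1.155
```

The element has four short edges and two long ones, with l_max/l_min = 2/sqrt(3), in agreement with the near-unity edge-length ratio quoted above.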
3.10 Grids with uniform cores
The possibility to create near-optimal space-filling tetrahedra allows the generation of grids where the major portion of the volume is composed of such near-perfect elements, and the approximation to a complex geometry is accomplished by a relatively small number of 'truly unstructured' elements. These types of grids are highly suitable for wave propagation problems (acoustics, electromagnetics), where mesh isotropy is required to obtain accurate results. The generation of such grids is shown in Figure 3.38.
In a first step the surface of the computational domain is discretized with triangles of a size prescribed by the user. As described above, this is typically accomplished through a combination of background grids, sources and element sizes linked to CAD entities. In a second step a mesh of space-filling tetrahedra (or even a Cartesian mesh subdivided into tetrahedra) that has the element size of the largest element desired in the volume is superimposed onto the volume. From this point onwards this 'core mesh' is treated as an unstructured mesh. This mesh is then adaptively refined locally so as to obtain the element size distribution prescribed by the user. Once the adaptive core mesh is obtained, the elements that are outside the domain to be gridded are removed. A number of techniques have been tried to make this step both robust and fast. One such technique uses a fine, uniform voxel (bin, Cartesian) mesh that covers the entire computational domain. All voxels that are crossed by the surface triangulation are marked. A marching cube (advancing layers) technique is then used to mark all the voxels that are inside/outside the computational domain. Any element of the adaptive Cartesian mesh that covers a voxel marked as either crossed by the surface triangulation or outside the computational domain is removed. This yields an additional list