Applied Computational Fluid Dynamics Techniques - Wiley Episode 2 Part 9




Figure 19.13 (a)–(d): LNG tanker fleet: evolution of the free surface

plane’ and the ship are moved, and the Navier–Stokes/VOF equations are integrated using the arbitrary Lagrangian–Eulerian frame of reference. The LNG tanks are assumed to be 80% full. This leads to an interesting interaction of the sloshing inside the tanks and the drifting ship. The mesh had approximately nelem = 2,670,000 elements, and the integration to 3 minutes of real time took 20 hours on a PC (3.2 GHz Intel P4, 2 Gbytes RAM, Linux OS, Intel compiler). Figure 19.12(b) shows the evolution of the flowfield, and Figures 19.12(c) and (d) the body motion. Note the change in position for the ship, as well as the roll motion.

19.2.8.5 Drifting fleet of ships

This example shows the use of interface capturing to predict the effects of drift and shielding in waves for a group of ships. The ships are the same LNG tankers as used in the previous example, but the tanks are considered full. The boundary conditions and mesh size distribution are similar to the ones used in the previous example. The ships are treated as free, floating objects subject to the hydrodynamic forces of the water. The surface nodes of the ships move according to a 6-DOF integration of the rigid-body motion equations. Approximately 30 layers of elements close to the ‘wave-maker plane’ and the ships are moved, and the Navier–Stokes/VOF equations are integrated using the arbitrary Lagrangian–Eulerian frame of reference. The mesh had approximately 10 million elements and the integration to 6 minutes of real time took 10 hours on an SGI Altix using six processors (1.5 GHz Intel Itanium II, 8 Gbytes RAM, Linux OS, Intel compiler). Figures 19.13(a)–(d) show the evolution of the flowfield and the position of the ships. Note how the ships in the back are largely unaffected by the waves as they are ‘blocked’ by the ships in front, and how these ships cluster together due to wave forces.

19.2.9 PRACTICAL LIMITATIONS OF FREE SURFACE CAPTURING

Free surface capturing has been used to compute violent free surface flows with overturning waves and changes in topology. Even though in principle free surface capturing is able to compute all interface problems, some practical limitations do remain. The first and foremost is accuracy. For smooth surfaces, free surface fitting can yield far more accurate results with fewer gridpoints. This is even more pronounced for cases where a free surface boundary layer is present, as it is very difficult to generate anisotropic grids for the free surface capturing cases.


20 OPTIMAL SHAPE AND PROCESS DESIGN

The ability to compute flowfields implicitly implies the ability to optimize shapes and processes. The change of shape in order to obtain a desired or optimal performance is denoted as optimal shape design. Due to its immense industrial relevance, the relative maturity (accuracy, speed) of flow solvers and increasingly powerful computers, optimal shape design has elicited a large body of research and development (Newman et al. (1999), Mohammadi and Pironneau (2001)). The present chapter gives an introduction to the key ideas, as well as the main techniques used to optimize shapes and processes.

20.1 The general optimization problem

In order to optimize a process or shape, a measurement of quality is required. This is given by one – or possibly many – so-called objective functions I, which are functions of design variables or input parameters β, as well as field unknowns u (e.g. a flowfield):

min_β I(β, u(β)), (20.1)

and is subject to a number of constraints:

- PDE constraints: these are the equations that describe the physics of the problem being considered, and may be written as R(u) = 0;

- geometric constraints g(β);

- physical constraints h(u).

Examples for objective functions are:

- inviscid drag (e.g. for trans/supersonic airfoils): I = ∫ p n_x dΓ;

- prescribed pressure (e.g. for supercritical airfoils): I = ∫ (p − p0)² dΓ;

- weight (e.g. for structures): I = ∫ ρ dΩ;

- uniformity of magnetic field (electrodynamics): I = ∫ (B − B0)² dΩ.
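Discretely, such objective functions are evaluated as sums over boundary faces. The following minimal Python sketch (our illustration, not from the book; all arrays are made-up face data) shows the idea for the first two examples, with the face areas acting as quadrature weights:

import numpy as np

def inviscid_drag(p, nx, area):
    # I = sum over wetted-surface faces of p * n_x * face area
    return np.sum(p * nx * area)

def pressure_mismatch(p, p0, area):
    # I = sum over faces of (p - p0)^2 * face area (inverse design)
    return np.sum((p - p0) ** 2 * area)

p    = np.array([1.02, 0.95, 0.88])  # face pressures
p0   = np.array([1.00, 0.97, 0.90])  # prescribed target pressures
nx   = np.array([0.10, 0.40, 0.20])  # x-components of the face normals
area = np.array([0.50, 0.30, 0.20])  # face areas
print(inviscid_drag(p, nx, area), pressure_mismatch(p, p0, area))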


Examples for PDE constraints R(u) are all the commonly used equations that describe the relevant physics of the problem:

- fluids: Euler/Navier–Stokes equations;

- structures: elasticity/plasticity equations;

- electromagnetics: Maxwell equations;

- etc.

Examples for geometric constraints g(β) are:

- wing cross-section area (stress, fuel): A > A0;

- trailing edge thickness (cooling): w > w0;

- width (manufacturability): w > w0;

- etc.

Examples for physical constraints h(u) are:

- a constrained negative pressure gradient to avoid separation: s · ∇p > p_g0;

- a constrained pressure to avoid cavitation: p > p0;

- a constrained shear stress to avoid blood haemolysis: |τ| < τ0;

- a constrained stress to avoid structural failure: |σ| < σ0;

- etc.

Before proceeding, let us define process and shape optimization with a higher degree of precision. Within shape optimization, we can clearly distinguish three different optimization options (Jakiela et al. (2000), Kicinger et al. (2005)): topological optimization (TOOP), shape optimization (SHOP) and sizing optimization (SIOP). These options mirror the typical design cycle (Raymer (1999)): preliminary design, detailed design and final design. With reference to Figure 20.1, we can define the following.

Figure 20.1 Different types of optimization: (a) topology; (b) shape; (c) sizing


- Topological optimization. The determination of an optimal material layout for an engineering system. TOOP has a considerable theoretical and empirical legacy in structural mechanics (Bendsoe and Kikuchi (1988), Jakiela et al. (2000), Kicinger et al. (2005), Bendsoe (2004)), where the removal of material from zones where low stress levels occur (i.e. no load-bearing function is being realized) naturally leads to the common goal of weight minimization. For fluid dynamics, TOOP has been used for internal flow problems (Borrvall and Peterson (2003), Hassine et al. (2004), Moos et al. (2004), Guest and Prévost (2006), Othmer et al. (2006)).

- Shape optimization. The determination of an optimal contour, or shape, for an engineering system whose topology has been fixed. This is the classic optimization task for airfoil/wing design, and has been the subject of considerable research and development during the last two decades (Pironneau (1985), Jameson (1988, 1995), Kuruvila et al. (1995), Reuther and Jameson (1995), Reuther et al. (1996), Anderson and Venkatakrishnan (1997), Elliott and Peraire (1997, 1998), Mohammadi (1997), Nielsen and Anderson (1998), Medic et al. (1998), Reuther et al. (1999), Nielsen and Anderson (2001), Mohammadi and Pironneau (2001), Dreyer and Martinelli (2001), Soto and Löhner (2001a,b, 2002)).

- Sizing optimization. The determination of an optimal size distribution for an engineering system whose topology and shape have been fixed. A typical sizing optimization in fluid mechanics is the layout of piping systems for refineries. Here, the topology and shape of the pipes are considered fixed, and one is only interested in an optimal arrangement of the diameters.

For all these types of optimization (TOOP, SHOP, SIOP) the parameter space is defined by a set of variables β. In order for any optimization procedure to be well defined, the set of design variables β must satisfy some basic conditions (Gen (1997)):

- non-redundancy: any process, shape or object can be obtained by one and only one set of design variables β;

- legality: any set of design variables β can be realized as a process, shape or object;

- completeness: any process, shape or object can be obtained by a set of design variables β; this guarantees that any process, shape or object can be obtained via optimization;

- causality (continuity): small variations in β lead to small changes in the process, shape or object being optimized; this is an important requirement for the convergence of optimization techniques.

Admittedly, at first sight all of these conditions seem logical and easy to satisfy. However, it has often been the case that an over-reliance on ‘black-box’ optimization has led users to employ ill-defined sets of design variables.

20.2 Optimization techniques

Given the vast range of possible applications, as well as their immediate benefit, it is not surprising that a wide variety of optimization techniques has emerged. In the simplest case, the parameter space β is tested exhaustively. An immediate improvement is achieved by testing in detail only those regions where ‘promising minima’ have been detected. This can be done by emulating the evolution of life via ‘survival of the fittest’ criteria, leading to so-called genetic algorithms. With reference to Figure 20.2, for smooth functions I one can evaluate the gradient I,β and change the design in the direction opposite to the gradient. In general, such gradient techniques will not be suitable to obtain globally optimal designs, but they can be used to quickly obtain local minima. In the following, we consider in more detail recursive exhaustive parameter scoping, genetic algorithms and gradient-based techniques. Here, we already note the rather surprising observation that with optimized gradient techniques and adjoint solvers the computational cost to obtain an optimal design is comparable to that of obtaining a single flowfield (!).

Figure 20.2 Local minimum via gradient-based optimization

20.2.1 RECURSIVE EXHAUSTIVE PARAMETER SCOPING

Suppose we are given the optimization problem

min_β I(β, u(β)). (20.5)

In order to norm the design variables, we define a range β_i^min ≤ β_i ≤ β_i^max for each design variable. An instantiation is then given by

β_i = (1 − α_i) β_i^min + α_i β_i^max, (20.6)

implying I(β) = I(β(α)). By working only with the α_i, an abstract, non-dimensional, bounded ([0, 1]) setting is achieved, which allows for a large degree of commonality among various optimization algorithms.

The simplest (and most expensive) way to solve (20.1) is to divide each design parameter into regular intervals, evaluate the cost function for all possible combinations, and retain the best. Assuming n_d subdivisions per design variable and N design variables, this amounts to n_d^N cost function evaluations. Each one of these cost function evaluations corresponds to one (or several) CFD runs, making this technique suitable only for problems where N is relatively small. An immediate improvement is achieved by restricting the number of subdivisions n_d to a manageable number, and then shrinking the parameter space recursively around the best design. While significantly faster, such a recursive procedure runs the risk of not finding the right minimum if the (unknown) local ‘wavelength’ of non-smooth functionals is smaller than the interval size chosen for the exhaustive search (see Figure 20.3).
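To put the n_d^N growth in perspective: for N = 10 design variables and n_d = 8 subdivisions each, a full scan requires 8^10 ≈ 10^9 cost function evaluations, while the recursive variant with n_d = 4 and ten shrink cycles needs of the order of 10 · 4^10 ≈ 10^7 – still only affordable when each evaluation is far cheaper than a full CFD run.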

Figure 20.3 Recursive exhaustive parameter scoping (the search box is shrunk from search region 1 to search region 2)

The basic steps required for the recursive exhaustive algorithm can be summarized as follows.

Ex1. Define:
- the parameter space size for α: [0, 1];
- the number of intervals n_d (interval length h = 1/n_d);
Ex2. while h > h_min:
Ex3. evaluate the cost function I(β(α)) for all possible combinations of the α_i;
Ex4. retain the combination α_opt with the lowest cost function;
Ex5. define the new search range: [α_opt − h/2, α_opt + h/2];
Ex6. define the new interval size: h := h/n_d;
end while
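A minimal Python sketch of steps Ex1–Ex6 (ours, not from the book; the quadratic cost function at the bottom is a cheap stand-in for a CFD-based evaluation):

import itertools
import numpy as np

def recursive_exhaustive(cost, beta_min, beta_max, nd=4, h_min=1e-3):
    # Ex1: normalized search box [0,1]^N, interval length h = 1/nd
    N = len(beta_min)
    lo, hi = np.zeros(N), np.ones(N)
    h = 1.0 / nd
    a_opt, best = None, np.inf
    while h > h_min:                                   # Ex2
        grids = [np.linspace(lo[i], hi[i], nd + 1) for i in range(N)]
        for alpha in itertools.product(*grids):        # Ex3: all combinations
            alpha = np.asarray(alpha)
            beta = (1.0 - alpha) * beta_min + alpha * beta_max   # mapping (20.6)
            I = cost(beta)
            if I < best:                               # Ex4: keep the best
                best, a_opt = I, alpha
        lo = np.clip(a_opt - h / 2, 0.0, 1.0)          # Ex5: shrink around best
        hi = np.clip(a_opt + h / 2, 0.0, 1.0)
        h /= nd                                        # Ex6: refine interval
    return (1.0 - a_opt) * beta_min + a_opt * beta_max, best

beta_min, beta_max = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
print(recursive_exhaustive(lambda b: (b[0] - 0.3)**2 + (b[1] + 1.0)**2,
                           beta_min, beta_max))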

20.2.2 GENETIC ALGORITHMS

Given the optimization problem (20.1), a simple and very general way to proceed is by copying what nature has done in the course of evolution: try variations of β and keep the ones that minimize (i.e. improve) the cost function I(β, u(β)). This class of optimization techniques is called genetic algorithms (Goldberg (1989), Deb (2001), De Jong (2006)) or evolutionary algorithms (Schwefel (1995)). The key elements of these techniques are:

- a fitness measure, given by I(β, u(β)), to measure different designs against each other;

- chromosome coding, to parametrize the design space given by β;

- the population size required to achieve a robust design;

- selection, to decide which members of the present/next generation are to be kept/used for reproductive purposes; and

- mutation, to obtain ‘offspring’ not present in the current population.


The most straightforward way to code the design variables into chromosomes is by defining them to be functions of the parameters 0 ≤ α_i ≤ 1. As before, an instantiation is given by

β_i = (1 − α_i) β_i^min + α_i β_i^max. (20.7)

The population required for a robust selection needs to be sufficiently large. A typical choice for the number of individuals in the population M as compared to the number of chromosomes (design variables) N is given by (20.8). In the notation of evolutionary strategies, a replacement strategy in which the µ parents and their λ offspring compete for survival is written as (µ + λ). In order to achieve a monotonic improvement in designs, the (µ + λ) strategy is typically used, and a percentage of ‘best individuals’ of each generation is kept (typical value, c_k = O(10%)). Furthermore, a percentage of ‘worst individuals’ is not admitted for reproductive purposes (typical value, c_c = O(75%)). Each new individual is generated by selecting (randomly) a pair i, j from the allowed list of individuals and combining the chromosomes randomly. Of the many possible ways to combine chromosomes, we mention the following.

(a) Chromosome splicing. A random crossover point l is selected from the design parameters. The chromosomes for the new individual that fall below l are chosen from i, the rest from j:

α_k = α_k^i, 1 ≤ k ≤ l,
α_k = α_k^j, l < k ≤ N. (20.9)

(b) Arithmetic pairing. A random pairing factor −ξ < γ < 1 + ξ is selected and applied to all variables of the chromosomes in a uniform way. The chromosomes for the new individual are given by

α_k = (1 − γ) α_k^i + γ α_k^j. (20.10)

(c) Random pairing. A random pairing factor −ξ < γ_k < 1 + ξ is selected independently for each design variable, i.e. the chromosomes for the new individual are given by

α_k = (1 − γ_k) α_k^i + γ_k α_k^j. (20.11)

Note that chromosome splicing and arithmetic pairing constitute particular cases of random pairing. The differences between these pairings can be visualized by considering the 2-D search space shown in Figure 20.4. If we have two points x1, x2 which are being paired to form a new point x3, then chromosome splicing, arithmetic pairing and random pairing lead to the regions shown in Figure 20.4. In particular, chromosome splicing only leads to two new possible point positions, arithmetic pairing to points along the line connecting x1, x2, and random pairing to points inside the extended bounding box given by x1, x2.

Figure 20.4 Regions for possible offspring from x1, x2
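In Python, the three pairing operators might look as follows (a sketch under our assumptions; ξ = 0.2 reproduces a −0.2 < γ < 1.2 range, and offspring are clipped back into [0, 1]):

import numpy as np

rng = np.random.default_rng()

def splice(ai, aj):
    # chromosome splicing: take alpha_k from i up to a random crossover
    # point l, and from j beyond it
    l = rng.integers(1, len(ai))
    return np.concatenate([ai[:l], aj[l:]])

def arithmetic_pairing(ai, aj, xi=0.2):
    # one random factor gamma applied uniformly to all chromosomes
    g = rng.uniform(-xi, 1.0 + xi)
    return np.clip((1.0 - g) * ai + g * aj, 0.0, 1.0)

def random_pairing(ai, aj, xi=0.2):
    # an independent random factor gamma_k for every chromosome
    g = rng.uniform(-xi, 1.0 + xi, size=len(ai))
    return np.clip((1.0 - g) * ai + g * aj, 0.0, 1.0)

ai, aj = np.array([0.1, 0.8, 0.5]), np.array([0.9, 0.2, 0.4])
print(splice(ai, aj), arithmetic_pairing(ai, aj), random_pairing(ai, aj))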

A population that is not modified continuously by mutations tends to become uniform, implying that the optimization may end in a local minimum. Therefore, a mutation frequency of c_m = O(0.25/N) has to be applied to the new generation, modifying chromosomes randomly.

The basic steps required per generation for genetic algorithms can be summarized as follows.

Ga1. Evaluate the fitness function I(β(α)) for all individuals;
Ga2. Sort the population in ascending (descending) order of I;
Ga3. Retain the c_k best individuals for the next generation;
Ga4. while the population is incomplete:
- select randomly a pair i, j from the c_c list;
- obtain random pairing factors 0.0 < γ_k < 1.2;
- obtain the chromosomes for the new individual: α_k = (1 − γ_k) α_k^i + γ_k α_k^j;
end while
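One generation (Ga1–Ga4) can be sketched in Python as follows, using the typical values c_k = 10% and c_c = 75% quoted above (our code, not from the book; the cost function again stands in for the CFD-based fitness evaluation):

import numpy as np

rng = np.random.default_rng()

def next_generation(pop, cost, ck=0.10, cc=0.75, xi=0.2):
    M, N = pop.shape
    I = np.array([cost(a) for a in pop])            # Ga1: evaluate fitness
    pop = pop[np.argsort(I)]                        # Ga2: sort, best first
    n_keep  = max(1, int(ck * M))                   # Ga3: elitism
    n_breed = max(2, int((1.0 - cc) * M))           # worst cc excluded from breeding
    new_pop = [pop[k].copy() for k in range(n_keep)]
    while len(new_pop) < M:                         # Ga4: fill by random pairing
        i, j = rng.choice(n_breed, size=2, replace=False)
        g = rng.uniform(-xi, 1.0 + xi, size=N)
        new_pop.append(np.clip((1.0 - g) * pop[i] + g * pop[j], 0.0, 1.0))
    out = np.array(new_pop)
    mask = rng.random(out.shape) < 0.25 / N         # mutation frequency c_m
    mask[:n_keep] = False                           # keep the retained elite intact
    out[mask] = rng.random(np.count_nonzero(mask))
    return out

pop = rng.random((20, 4))                           # 20 individuals, 4 chromosomes
for _ in range(50):
    pop = next_generation(pop, lambda a: np.sum((a - 0.3) ** 2))
print(pop[0])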

For cases with a single, defined optimum, one observes that:

- the best candidate does not change over many generations – only the occasional mutation will yield an improvement, and thereafter the same pattern of an unchanging best candidate will repeat itself;

- the top candidates (e.g. the top 25% of the population) become uniform, i.e. the genetic pool collapses.

Such a behaviour is easy to detect, and much faster convergence to the defined optimum can be achieved by ‘randomizing’ the population. If the chromosomes of any two individuals i, j are such that

d_ij = |α^i − α^j| < ε, (20.12)

the difference (distance) d_ij is artificially enlarged by adding/subtracting a random multiple of ε to one of the chromosomes. This process is repeated for all pairs i, j until none of them satisfies (20.12). As the optimum is reached, one again observes that the top candidate remains unchanged over many generations. The reason for this is that an improvement in the cost function can only be obtained with variations that are smaller than ε. When such a behaviour is detected, the solution is to reduce ε and continue. Typical reduction factors are 0.1–0.2. Given that 0 < α < 1, a stopping criterion is automatically achieved for such cases: when the value of ε is smaller than a preset threshold, convergence has been achieved.
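The randomization pass can be sketched as follows (our code; the norm in the distance test and the symbol ε follow the reconstruction of (20.12) above, and ε is assumed small compared to 1):

import numpy as np

rng = np.random.default_rng()

def randomize(pop, eps):
    # enforce d_ij >= eps for all pairs i, j of individuals (cf. (20.12))
    M, N = pop.shape
    for i in range(M):
        for j in range(i + 1, M):
            while np.linalg.norm(pop[i] - pop[j]) < eps:
                # enlarge the distance by adding a random multiple of eps
                k = rng.integers(N)
                pop[j, k] = np.clip(pop[j, k] + eps * rng.uniform(-2.0, 2.0),
                                    0.0, 1.0)
    return pop

pop = np.array([[0.5, 0.5], [0.5, 0.5], [0.9, 0.1]])
print(randomize(pop, eps=0.05))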

The advantages of genetic algorithms are manifold: they represent a completely general technique that is able to go beyond local minima, and hence they are suitable for ‘rough’ cost functions I with multiple local minima. Genetic algorithms have been used on many occasions for shape optimization (see, e.g., Gage and Kroo (1993), Crispin (1994), Quagliarella and Cioppa (1994), Quagliarella (1995), Doorly (1995), Periaux (1995), Yamamoto and Inoue (1995), Vicini and Quagliarella (1997, 1999), Obayashi (1998), Obayashi et al. (1998), Zhu and Chan (1998), Naujoks et al. (2000), Pulliam et al. (2003)). On the other hand, the number of cost function evaluations (and hence field solutions) required is of O(N²), where N denotes the number of design parameters. The speed of convergence can also be strongly dependent on the crossover, mutation and selection criteria.

Given the large number of instantiations (i.e. detailed, expensive CFD runs) required by genetic algorithms, considerable efforts have been devoted to reducing this number as much as possible. Two main options are possible here:

- keep a database of all generations/instantiations, and avoid recalculation of regions already visited/explored;

- replace the detailed CFD runs by approximate models.

Note that both of these options differ from the basic modus operandi of natural selection. The first case would imply selection from a semi-infinite population without regard to the finite life span of organisms. The second case replaces the actual organism by an approximate model of the same.

20.2.2.1 Tabu search

By keeping in a database the complete history of all individuals generated and evaluated so far, one is in a position to immediately reject offspring that:

- are too close to individuals already in the database;

- fall into regions populated by individuals whose fitness is low.

The regions identified as unpromising are labelled as ‘forbidden’ or ‘tabu’, hence the name.
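A sketch of such a database in Python (ours; the closeness threshold d_min, the cost level bad_cost marking low-fitness individuals, and the ‘tabu radius’ of three times d_min around them are all illustrative assumptions):

import numpy as np

class TabuDatabase:
    def __init__(self, d_min, bad_cost):
        self.alphas, self.costs = [], []
        self.d_min = d_min            # 'too close' threshold
        self.bad_cost = bad_cost      # costs above this mark low-fitness designs

    def record(self, alpha, cost):
        self.alphas.append(np.asarray(alpha, dtype=float))
        self.costs.append(float(cost))

    def admissible(self, alpha):
        # reject offspring too close to known designs or inside 'tabu' regions
        alpha = np.asarray(alpha, dtype=float)
        for a, I in zip(self.alphas, self.costs):
            d = np.linalg.norm(alpha - a)
            if d < self.d_min:
                return False
            if I > self.bad_cost and d < 3.0 * self.d_min:
                return False
        return True

db = TabuDatabase(d_min=0.02, bad_cost=10.0)
db.record([0.5, 0.5], 25.0)           # a poor design: its vicinity becomes tabu
print(db.admissible([0.51, 0.50]), db.admissible([0.9, 0.1]))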

20.2.2.2 Approximate models

As stated before, considerable efforts have been devoted to the development of approximate models. The key idea is to use these (cheaper) models to steer the genetic algorithm into the promising regions of the parameter space, and to use the expensive CFD runs as seldom as possible (Quagliarella and Chinnici (2005)). The approximate models can be grouped into the following categories.

- Reduced complexity models (RCMs). These replace the physical approximations in the expensive CFD run by simpler physical models. Examples of RCMs are potential solvers used as approximations to Euler/RANS solvers, or Euler/boundary-layer solvers used as approximations to RANS solvers.

- Reduced degree of freedom models (RDOFMs). These keep the physical approximations of the expensive CFD run, but compute it on a mesh with far fewer degrees of freedom. Examples are all those approximate models that use coarse grid solutions to explore the design space.

- General purpose approximators (GPAs). These use approximation theory to extrapolate the solutions obtained so far into unexplored regions of the parameter space. The most common of these are:

- response surfaces, which fit a low-order polynomial through a vicinity of data points (Giannakoglov (2002));

- neural networks, which are trained to reproduce the input–output behaviour of the cost functions evaluated so far (Papila et al. (1999));

- proper orthogonal decompositions (LeGresley and Alonso (2000));

- kriging (Simpson et al. (1998), Kumano et al. (2006)).
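As an illustration of the simplest GPA, the following sketch fits a quadratic response surface through the (design, cost) pairs evaluated so far by least squares (our code; practical response surfaces also include cross terms, and the sample data below are made up):

import numpy as np

def fit_response_surface(A, I):
    # fit I(a) ~ c0 + sum_i b_i a_i + sum_i q_i a_i^2 by least squares
    A, I = np.asarray(A), np.asarray(I)
    X = np.hstack([np.ones((len(A), 1)), A, A ** 2])
    coef, *_ = np.linalg.lstsq(X, I, rcond=None)
    return coef

def eval_response_surface(coef, a):
    a = np.asarray(a)
    return np.concatenate([[1.0], a, a ** 2]) @ coef

rng = np.random.default_rng(0)
A = rng.random((20, 3))                               # 20 evaluated designs
I = np.array([np.sum((a - 0.3) ** 2) for a in A])     # their (expensive) costs
coef = fit_response_surface(A, I)
print(eval_response_surface(coef, [0.3, 0.3, 0.3]))   # cheap surrogate lookup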

20.2.2.4 Pareto front

While an optimality criterion such as the one given by (20.1) forms a good basis for the derivation of optimization techniques, many design problems are defined by several objectives. One recourse is to modify (20.1) by writing the cost function as a sum of different criteria:

I(β, u(β)) = Σ_i c_i I_i(β, u(β)). (20.13)

The important decision a designer or analyst has to make before starting an optimization is to select the criteria I_i and the weights c_i. Given that engineering is typically a compromise of different, conflicting criteria (styling, performance, cost, etc.), this step is not well defined. This implies that a proper choice of weights c_i before optimization may be difficult.

Genetic algorithms can handle multiple design objectives concurrently using the concept of non-dominated individuals (Goldberg (1989), Deb (2001)). Given multiple objectives I_i(β, u(β)), i = 1, …, m, the objective vector of individual k is partially less than the objective vector of individual l if

I_i(β^k, u(β^k)) ≤ I_i(β^l, u(β^l)) ∀ i = 1, …, m, and ∃ j: I_j(β^k, u(β^k)) < I_j(β^l, u(β^l)). (20.14)

All individuals that satisfy these conditions with respect to all other individuals are said to be non-dominated. The key idea is to set the reproductive probabilities of all non-dominated individuals equally high. Therefore, before a new reproductive cycle starts, all individuals are ordered according to their optimality. In a series of passes, all non-dominated individuals are taken out of the population and assigned progressively lower probabilities and/or rankings (see Figure 20.5). The net effect of this sorting is a population that drifts towards the so-called Pareto front of optimal designs. Additional safeguards in the form of niche formation and mating restrictions are required in order to prevent convergence to a single point (Deb (2001)). Note that one of the key advantages of such a Pareto ranking is that the multi-objective vector is reduced to a scalar: no weights are required, and the Pareto front gives a clear insight into the compromises that drive the design.

Figure 20.5 Pareto fronts for a design problem with two objective functions

The visualization of Pareto fronts for higher numbers of criteria is the subject of current research (e.g. via data mining concepts (Obayashi (2002))).
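The domination test (20.14) and the pass-wise ranking can be sketched as follows (our code; objective vectors are rows, and lower values are better):

import numpy as np

def dominates(Ik, Il):
    # 'partially less than', cf. (20.14)
    return np.all(Ik <= Il) and np.any(Ik < Il)

def pareto_ranks(I):
    # peel off successive non-dominated fronts; rank 0 is the Pareto front
    I = np.asarray(I, dtype=float)
    ranks = np.full(len(I), -1)
    remaining, front = list(range(len(I))), 0
    while remaining:
        nd = [k for k in remaining
              if not any(dominates(I[l], I[k]) for l in remaining if l != k)]
        for k in nd:
            ranks[k] = front
        remaining = [k for k in remaining if k not in nd]
        front += 1
    return ranks

# two conflicting objectives; only the last design is dominated (by the second)
print(pareto_ranks([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]]))  # [0 0 0 1]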

20.2.3 GRADIENT-BASED ALGORITHMS

The second class of optimization techniques is based on evaluating gradients of I(β, u(β)). From a Taylor series expansion we have

I(β + δβ) ≈ I(β) + I,β · δβ, (20.15)

so that a step in the direction opposite to the gradient, δβ = −λ I,β with λ > 0, decreases the cost function. The process has been sketched in Figure 20.2. As noted by Jameson (1995), a smoothed gradient can often be employed to speed up convergence. Denoting the gradient by G = I,β, a simple Laplacian smoothing can be achieved via a smoothing step of the form Ḡ − ε Ḡ,ββ = G, with the smoothed gradient Ḡ then used in place of G.
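A minimal sketch of the resulting design loop (our code and assumptions: gradients by one-sided finite differences – an adjoint solver would deliver G at roughly the cost of a single extra flow solution – and an explicit Laplacian smoothing pass over the design variables):

import numpy as np

def gradient(cost, beta, db=1e-4):
    # one-sided finite-difference approximation of G = I,beta
    # (costs N+1 cost evaluations per design step)
    I0, G = cost(beta), np.zeros_like(beta)
    for i in range(len(beta)):
        b = beta.copy()
        b[i] += db
        G[i] = (cost(b) - I0) / db
    return G

def smooth(G, eps=0.3, n_passes=5):
    # explicit Laplacian smoothing of the gradient (interior points only)
    G = G.copy()
    for _ in range(n_passes):
        G[1:-1] += eps * (G[2:] - 2.0 * G[1:-1] + G[:-2])
    return G

def steepest_descent(cost, beta, lam=0.1, n_steps=100):
    for _ in range(n_steps):
        beta = beta - lam * smooth(gradient(cost, beta))   # delta_beta = -lam*G
    return beta

print(steepest_descent(lambda b: np.sum((b - 0.3) ** 2), np.zeros(6)))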
