
Accepted Manuscript

On the utility of GPU accelerated high-order methods for unsteady flow simulations: A comparison with industry-standard tools

B.C. Vermeire, F.D. Witherden, P.E. Vincent

PII: S0021-9991(16)30713-6

DOI: http://dx.doi.org/10.1016/j.jcp.2016.12.049

Reference: YJCPH 7051

To appear in: Journal of Computational Physics

Received date: 29 April 2016

Revised date: 27 October 2016

Accepted date: 26 December 2016

Please cite this article in press as: B.C. Vermeire et al., On the utility of GPU accelerated high-order methods for unsteady flow simulations: A comparison with industry-standard tools, J. Comput. Phys. (2017), http://dx.doi.org/10.1016/j.jcp.2016.12.049

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.


On the Utility of GPU Accelerated High-Order Methods for Unsteady Flow Simulations: A Comparison with Industry-Standard Tools

B. C. Vermeire∗, F. D. Witherden, and P. E. Vincent

Department of Aeronautics, Imperial College London, SW7 2AZ

January 4, 2017

Abstract

First- and second-order accurate numerical methods, implemented for CPUs, underpin the majority of industrial CFD solvers. Whilst this technology has proven very successful at solving steady-state problems via a Reynolds Averaged Navier-Stokes approach, its utility for undertaking scale-resolving simulations of unsteady flows is less clear. High-order methods for unstructured grids and GPU accelerators have been proposed as an enabling technology for unsteady scale-resolving simulations of flow over complex geometries. In this study we systematically compare accuracy and cost of the high-order Flux Reconstruction solver PyFR running on GPUs and the industry-standard solver STAR-CCM+ running on CPUs when applied to a range of unsteady flow problems. Specifically, we perform comparisons of accuracy and cost for isentropic vortex advection (EV), decay of the Taylor-Green vortex (TGV), turbulent flow over a circular cylinder, and turbulent flow over an SD7003 aerofoil. We consider two configurations of STAR-CCM+: a second-order configuration, and a third-order configuration, where the latter was recommended by CD-Adapco for more effective computation of unsteady flow problems. Results from both PyFR and STAR-CCM+ demonstrate that third-order schemes can be more accurate than second-order schemes for a given cost, e.g. going from second- to third-order, the PyFR simulations of the EV and TGV achieve 75x and 3x error reduction respectively for the same or reduced cost, and STAR-CCM+ simulations of the cylinder recovered wake statistics significantly more accurately for only twice the cost. Moreover, advancing to higher-order schemes on GPUs with PyFR was found to offer even further accuracy vs. cost benefits relative to industry-standard tools.

∗ Corresponding author; e-mail b.vermeire@imperial.ac.uk


1 Introduction

Industrial computational fluid dynamics (CFD) applications require numerical methods that are concurrently accurate and low-cost for a wide range of applications. These methods must be flexible enough to handle complex geometries, which is usually achieved via unstructured mixed element meshes. Conventional unstructured CFD solvers typically employ second-order accurate spatial discretizations. These second-order schemes were developed primarily in the 1970s to 1990s to improve upon the observed accuracy limitations of first-order methods [1]. While second-order schemes have been successful for steady state solutions, such as those obtained using the Reynolds Averaged Navier-Stokes (RANS) approach, there is evidence that higher-order schemes can be more accurate for scale-resolving simulations of unsteady flows [1]. Recently, there has been a surge in the development of high-order unstructured schemes that are at least third-order accurate in space. Such methods have been the focus of ongoing research, since there is evidence they can provide improved accuracy at reduced computational cost for a range of applications, when compared to conventional second-order schemes [1]. Such high-order unstructured schemes include the discontinuous Galerkin (DG) [2,3], spectral volume (SV) [4], and spectral difference (SD) [5,6] methods, amongst others. One particular high-order unstructured method is the flux reconstruction (FR), or correction procedure via reconstruction (CPR), scheme first introduced by Huynh [7]. This scheme is particularly appealing as it unifies several high-order unstructured numerical methods within a common framework. Depending on the choice of correction function one can recover the collocation based nodal DG, SV, or SD methods, at least for the case of linear equations [7,8]. In fact, a wide range of schemes can be generated that are provably stable for all orders of accuracy [9]. The FR scheme was subsequently extended to mixed element types by Wang and Gao [8], three-dimensional problems by Haga and Wang [10], and tetrahedra by Williams and Jameson [11]. These extensions have allowed the FR scheme to be used successfully for the simulation of transitional and turbulent flows via scale resolving simulations, such as large eddy simulation (LES) and direct numerical simulation (DNS) [12,13,14].

Along with recent advancements in numerical methods, there have been significant changes in the types of hardware available for scientific computing. Conventional CFD solvers have been written to run on large-scale shared and distributed memory clusters of central processing units (CPUs), each with a small number of scalar computing cores per device. However, the introduction of accelerator hardware, such as graphical processing units (GPUs), has led to extreme levels of parallelism with several thousand compute "cores" per device. One advantage of GPU computing is that, due to such high levels of parallelism, GPUs are typically capable of achieving much higher theoretical peak performance than CPUs at similar price points. This makes GPUs appealing for performing CFD simulations, which often require large financial investments in computing hardware and associated infrastructure.


The objective of the current work is to quantify the cost and accuracy benefits that can be expected from using high-order unstructured schemes deployed on GPUs for scale-resolving simulations of unsteady flows. This will be performed via a comparison of the high-order accurate open-source solver PyFR [15] running on GPUs with the industry-standard solver STAR-CCM+ [16] running on CPUs for four relevant unsteady flow problems. PyFR was developed to leverage synergies between high-order accurate FR schemes and GPU hardware [15]. We consider two configurations of STAR-CCM+: a second-order configuration, and a third-order configuration, where the latter was recommended by CD-Adapco for more effective computation of unsteady flow problems. Full configurations for all STAR-CCM+ simulations are provided as electronic supplementary material. We will compare these configurations on a set of test cases including a benchmark isentropic vortex problem and three cases designed to test the solvers for scale resolving simulations of turbulent flows. These are the types of problems that current industry-standard tools are known to find challenging [17], and for which high-order schemes have shown particular promise [1]. The utility of high-order methods in other flow regimes, such as those involving shocks or discontinuities, is still an open research topic. In this study we are interested in quantifying the relative cost of each solver in terms of total resource utilization on equivalent era hardware, as well as quantitative accuracy measurements based on suitable error metrics, for the types of problems for which high-order methods have shown promise.

The paper is structured as follows. In section 2 we will briefly discuss the software packages being compared. In section 3 we will discuss the hardware configurations each solver is being run on, including a comparison of monetary cost and theoretical performance statistics. In section 4 we will discuss possible performance metrics for comparison and, in particular, the resource utilization metric used in this study. In section 5 we will present several test cases and results obtained with both PyFR and STAR-CCM+. In particular, we are interested in isentropic vortex advection, Taylor-Green vortex breakdown, turbulent flow over a circular cylinder, and turbulent flow over an SD7003 aerofoil. Finally, in section 6 we will present conclusions based on these comparisons and discuss implications for the adoption of high-order unstructured schemes on GPUs for industrial CFD.

2 Solvers

PyFR [15] (http://www.pyfr.org/) is an open-source Python-based framework for solving advection-diffusion type problems on streaming architectures using the flux reconstruction (FR) scheme of Huynh [7]. PyFR is platform portable via the use of a domain specific language based on Mako templates. This means PyFR can run on AMD or NVIDIA GPUs, as well as traditional CPUs. A brief summary of the functionality of PyFR is given in Table 1, which includes mixed-element unstructured meshes with arbitrary order schemes. Since PyFR is platform portable, it can run on CPUs using OpenCL or C/OpenMP, NVIDIA GPUs using CUDA or OpenCL, AMD GPUs using OpenCL, or heterogeneous systems consisting of a mixture of these hardware types [18]. For the current study we are running PyFR version 0.3.0 on NVIDIA GPUs using the CUDA backend, which utilizes cuBLAS for matrix multiplications. We will also use an experimental version of PyFR 0.3.0 that utilizes the open source linear algebra package GiMMiK [19]. A patch to go from PyFR v0.3.0 to this experimental version has been provided as electronic supplementary material. GiMMiK generates bespoke kernels, i.e. kernels written specifically for each particular operator matrix, at compile time to accelerate matrix multiplication routines. The cost of PyFR 0.3.0 with GiMMiK will be compared against the release version of PyFR 0.3.0 to evaluate its advantages for sparse operator matrices.
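To illustrate the idea of bespoke operator-matrix kernels, the sketch below unrolls the nonzeros of a small constant matrix directly into generated C source. This is only a schematic of the approach described above; it is not the GiMMiK API, and the function and example matrix are hypothetical.

```python
import numpy as np

# Schematic of bespoke-kernel generation in the spirit of GiMMiK (this is
# not the GiMMiK API). A fixed operator matrix A is unrolled into dedicated
# C source: zero entries are skipped entirely, so the generated kernel
# performs no loads of A and no multiplications by zero at run time.
def generate_kernel(A, name="opmul"):
    m, k = A.shape
    lines = [f"void {name}(int n, const double *b, double *c) {{",
             "    for (int j = 0; j < n; ++j) {"]
    for i in range(m):
        terms = [f"{float(A[i, l])!r} * b[{l} * n + j]"
                 for l in range(k) if A[i, l] != 0.0]
        lines.append(f"        c[{i} * n + j] = {' + '.join(terms) or '0.0'};")
    lines += ["    }", "}"]
    return "\n".join(lines)

# Example: a small operator with 50% sparsity.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0]])
print(generate_kernel(A))
```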

Table 1. Functionality summary of PyFR v0.3.0

Systems: Compressible Euler, Navier-Stokes
Dimensionality: 2D, 3D
Element Types: Triangles, Quadrilaterals, Hexahedra, Prisms, Tetrahedra, Pyramids
Platforms: CPU, GPU (NVIDIA and AMD)
Spatial Discretization: Flux Reconstruction
Temporal Discretization: Explicit
Precision: Single, Double

STAR-CCM+ [16] is a CFD and multiphysics solution package based on the finite volume method. It includes a CAD package for generating geometry, meshing routines for generating various mesh types including tetrahedral and polyhedral, and a multiphysics flow solver. A short summary of the functionality of STAR-CCM+ is given in Table 2. It supports first, second, and third-order schemes in space. In addition to an explicit method, STAR-CCM+ includes support for implicit temporal schemes. Implicit schemes allow for larger global time-steps at the expense of additional inner sweeps to converge the unsteady residual. For the current study we use the double precision version STAR-CCM+ 9.06.011-R8. This version is used since PyFR also runs in full double precision, unlike the mixed precision version of STAR-CCM+.


Table 2. Functionality summary of STAR-CCM+ v9.06

Systems: Compressible Euler, Navier-Stokes, etc.
Dimensionality: 2D, 3D
Element Types: Tetrahedral, Polyhedral, etc.
Platforms: CPU
Spatial Discretization: Finite Volume
Temporal Discretization: Explicit, Implicit
Precision: Mixed, Double

3 Hardware

PyFR is run on either a single or multi-GPU configuration of the NVIDIA Tesla K20c. For running STAR-CCM+ we use either a single Intel Xeon E5-2697 v2 CPU, or a cluster consisting of InfiniBand interconnected Intel Xeon X5650 CPUs. The specifications for these various pieces of hardware are provided in Table 3. The purchase price of the Tesla K20c and Xeon E5-2697 v2 are similar; however, the Tesla K20c has a significantly higher peak double precision floating point arithmetic rate and memory bandwidth. The Xeon X5650, while significantly cheaper than the Xeon E5-2697 v2, has a similar price to performance ratio when considering both the theoretical peak arithmetic rate and memory bandwidth.

Table 3. Hardware specifications, approximate prices taken as of date written

                          Tesla K20c   Xeon E5-2697 v2   Xeon X5650
Arithmetic (GFLOP/s)      1170         280               64.0
Memory Bandwidth (GB/s)   208          59.7              32.0
CUDA Cores / Cores        2496         12                6


4 Performance Metrics

[…] TauBench available for normalizing the PyFR simulations. Also, this approach does not take into account the price of different types of hardware. While energy consumption is a relevant performance metric, it relies heavily on system architecture, peripherals, cooling systems, and other design choices that are beyond the scope of the current study.

In the current study we introduce a cost metric referred to as resource utilization. This is measured as the product of the cost of the hardware being used for a simulation in £, and the amount of time that hardware has been utilized in seconds. This gives a cost metric with the units £ × seconds. Therefore, resource utilization incorporates both the price to performance ratio of a given piece of hardware, and the ability of the solver to use it efficiently to complete a simulation in a given amount of time. This effectively normalizes the computational cost by the price of the hardware used.

Two fundamental constraints for CFD applications are the available budget for purchasing computer hardware and the maximum allowable time for a simulation to be completed. Depending on application requirements, most groups are limited by one of these two constraints. When the proposed resource utilization metric is constrained with a fixed capital expenditure budget it becomes directly correlated to total simulation time. If constrained by a maximum allowable simulation time, resource utilization becomes directly correlated to the required capital expenditure. Therefore, resource utilization is a useful measurement for two of the dominant constraints for CFD simulations: total upfront cost and total simulation time. Any solver and hardware combination that completes a simulation with a comparatively lower resource utilization can be considered faster, if constrained by a hardware acquisition budget, or cheaper, if constrained by simulation time.
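Since the metric is simply a product, it is trivial to compute; a minimal sketch follows, where the hardware prices are placeholder values for illustration rather than the prices used in the paper.

```python
# Minimal sketch of the resource utilization metric: hardware price (£)
# multiplied by the wall-clock time used (seconds), scaled by device count.
# The prices below are illustrative placeholders, not the paper's figures.
HARDWARE_PRICE_GBP = {
    "Tesla K20c": 2000.0,       # assumed price for illustration
    "Xeon E5-2697 v2": 2000.0,  # assumed price for illustration
    "Xeon X5650": 700.0,        # assumed price for illustration
}

def resource_utilization(device, n_devices, wall_time_s):
    """Return the resource utilization of a run in pound-seconds."""
    return HARDWARE_PRICE_GBP[device] * n_devices * wall_time_s

# Example: three GPUs for one hour vs. thirty CPUs for ten hours.
print(resource_utilization("Tesla K20c", 3, 3600.0))
print(resource_utilization("Xeon X5650", 30, 36000.0))
```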

5 Test Cases

5.1 Isentropic Vortex Advection

5.1.1 Background

Isentropic vortex advection is a commonly used test case for assessing the accuracy of flow solvers for unsteady inviscid flows using the Euler equations [1]. This problem has an exact analytical solution at all times, which is simply the advection of the steady vortex with the mean flow. This allows us to easily assess error introduced by the numerical scheme over long advection periods. The initial flow field for isentropic vortex advection is specified as [1,15]

ρ = (1 − S²M²(γ − 1) e^{2f} / 8π²)^{1/(γ−1)},
u = S y e^f / (2πR),
v = 1 − S x e^f / (2πR),
p = ρ^γ / (γM²),

where ρ is the density, u and v are the velocity components, p is the pressure, f = (1 − x² − y²)/2R², S = 13.5 is the strength of the vortex, M = 0.4 is the free-stream Mach number, R = 1.5 is the radius, and γ = 1.4.
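As a minimal sketch, the initial condition above can be transcribed directly; the grid resolution and extents below are illustrative choices matching the 40 × 40 domain described in the following paragraphs.

```python
import numpy as np

# Sketch: evaluate the isentropic vortex initial condition above on a
# uniform grid. gamma, S, M, R are as given in the text; the 401 x 401
# sampling of the 40 x 40 domain is an illustrative choice.
gamma, S, M, R = 1.4, 13.5, 0.4, 1.5

x, y = np.meshgrid(np.linspace(-20.0, 20.0, 401),
                   np.linspace(-20.0, 20.0, 401))
f = (1.0 - x**2 - y**2) / (2.0 * R**2)

rho = (1.0 - S**2 * M**2 * (gamma - 1.0) * np.exp(2.0 * f)
       / (8.0 * np.pi**2)) ** (1.0 / (gamma - 1.0))
u = S * y * np.exp(f) / (2.0 * np.pi * R)
v = 1.0 - S * x * np.exp(f) / (2.0 * np.pi * R)
p = rho**gamma / (gamma * M**2)
```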

For PyFR we use a K20c GPU running a single partition. We use a 40 × 40 two-dimensional domain with periodic boundary conditions on the upper and lower edges and Riemann invariant free stream boundaries on the left and right edges. This allows the vortex to advect indefinitely through the domain, while spurious waves are able to exit through the lateral boundaries. The simulations are run in total to t = 2000, which corresponds to 50t_c, where t_c is a domain flow through time. A five-stage fourth-order adaptive Runge-Kutta scheme [20,21,22] is used for time stepping with maximum and relative error tolerances of 10^-8. We consider P1 to P5 quadrilateral elements with a nominal 480² solution points. The number of elements and solution points for each scheme are shown in Table 4. All but the P4 simulation have the nominal number of degrees of freedom, while the P4 simulation has slightly more due to constraints on the number of solution points per element. Solution and flux points are located at Gauss-Legendre points and Rusanov [15] fluxes are used at the interface between elements.
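The adaptive time stepping referenced above follows standard embedded Runge-Kutta error control [20,21,22]; the sketch below shows a generic step-size controller of this type, not PyFR's internal implementation.

```python
# Generic embedded Runge-Kutta step-size controller (a sketch, not PyFR's
# implementation). err is the embedded error estimate for the attempted
# step, already normalised by the absolute/relative tolerances as in
# [20,21]; p is the order of the scheme. err <= 1 means the step is
# accepted; in either case the returned dt is used for the next attempt.
def next_dt(dt, err, p, safety=0.9, fac_min=0.2, fac_max=5.0):
    if err <= 0.0:
        return dt * fac_max
    fac = safety * err ** (-1.0 / (p + 1.0))
    return dt * min(fac_max, max(fac_min, fac))
```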

With STAR-CCM+ we use all 12 cores of the Intel Xeon E5-2697 v2 CPU with default partitioning. We also use a 40 × 40 two-dimensional domain with periodic boundary conditions on the upper and lower edges. The left and right boundaries are specified as free stream, again to let spurious waves exit the domain. For the second-order configuration we use the coupled energy and flow solver settings. We use an explicit temporal scheme with an adaptive time step based on a fixed Courant number of 1.0. We also test the second-order implicit solver using a fixed time-step ten times greater than the average explicit step size. The ideal gas law is used as the equation of state with inviscid flow and a second-order spatial discretization. All other solver settings are left at their default values. For the third-order configuration a Monotonic Upstream-Centered Scheme for Conservation Laws (MUSCL) scheme is used with coupled energy and flow equations, the ideal gas law, and implicit time-stepping with a fixed time-step Δt = 0.025. Once again, the number of elements and solution points are given in Table 4. We perform one set of STAR-CCM+ simulations with the same total number of degrees of freedom as the PyFR results. A second set of simulations was also performed using the second-order configuration on a grid that was uniformly refined by a factor of two.

Table 4. Number of elements and degrees of freedom for isentropic vortex advection simulations

Scheme            Temporal   Elements   DOF
STAR 2nd-Order    Implicit   480²       480²
STAR 2nd-Order    Explicit   960²       960²
STAR 2nd-Order    Implicit   960²       960²
STAR 3rd-Order    Implicit   480²       480²
PyFR P1           Explicit   240²       480²
PyFR P2           Explicit   160²       480²
PyFR P3           Explicit   120²       480²
PyFR P4           Explicit   100²       500²
PyFR P5           Explicit   80²        480²

To evaluate the accuracy of each method, we consider the L2 norm of the density error in a 4 × 4 region at the center of the domain. This error is calculated each time the vortex returns to the origin as per Witherden et al. [15]; therefore, the L2 error is evaluated once per advection period. STAR-CCM+ does not allow for the solution to be exported at an exact time with the explicit flow solver, so the closest point in time is used instead and the exact solution is shifted to a corresponding spatial location to match. To get a good approximation of the true L2 error we use a 196 point quadrature rule within each element.
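A sketch of this error measure, assuming the numerical density and geometric data are already sampled at the quadrature points of each element in the 4 × 4 region (the arrays and the rho_exact callable are assumed inputs):

```python
import numpy as np

# Sketch: L2 norm of the density error over the 4 x 4 region at the domain
# centre. Assumed inputs: rho_num, x, y, w are (n_elements, n_quad) arrays
# holding the numerical density, quadrature-point coordinates, and the
# quadrature weights (scaled by each element's Jacobian) for the 196-point
# rule; rho_exact is a callable giving the shifted exact solution.
def l2_density_error(rho_num, x, y, w, rho_exact):
    diff2 = (rho_num - rho_exact(x, y)) ** 2
    return np.sqrt(np.sum(w * diff2))
```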

5.1.2 Results

Contours of density for the PyFR P5 and the 480² degree of freedom STAR-CCM+ simulations are shown in Figure 1 at t = t_c, t = 5t_c, t = 10t_c, and t = 50t_c. It is evident that all three simulations start with the same initial condition at t = 0. Some small stepping is apparent in both STAR-CCM+ initial conditions due to the projection of the smooth initial solution onto the piecewise constant basis used by the finite volume scheme. For PyFR P5 all results are qualitatively consistent with the exact initial condition, even after 50 flow through times. The results using the second-order STAR-CCM+ configuration at t = t_c already show some diffusion, which is more pronounced by t = 5t_c and asymmetrical in nature. By t = 50t_c the second-order STAR-CCM+ results are not consistent with the exact solution. The low density vortex core has broken up and been dispersed to the left hand side of the domain, suggesting a non-linear build up of error at the later stages of the simulation. The third-order STAR-CCM+ configuration has significantly less dissipation than the second-order configuration. However, by t = 50t_c the vortex has moved up and to the left of the origin.

Plots of the L2 norm of the density error against resource utilization are shown in Figure 2 to Figure 4 for t = t_c, t = 5t_c, and t = 50t_c, respectively, for all simulations. After one flow through of the domain, as shown in Figure 2, all of the PyFR simulations outperform all of the STAR-CCM+ simulations in terms of resource utilization by approximately an order of magnitude. The simulations with GiMMiK outperform them by an even greater margin. The PyFR simulations are all more accurate, with the P5 scheme ≈ 5 orders of magnitude more accurate than STAR-CCM+. This trend persists at t = 5t_c and t = 50t_c: the PyFR simulations are approximately an order of magnitude cheaper than the 480² degree of freedom STAR-CCM+ simulations and are significantly more accurate. Interestingly, the PyFR P1 to P3 simulations require approximately the same resource utilization, suggesting greater accuracy can be achieved for no additional computational cost. Also, we find that the PyFR simulations using GiMMiK are between 20% and 35% less costly than the simulations without it, depending on the order of accuracy.

We also observe that simulations using the second-order STAR-CCM+ configuration with implicit time-stepping have significantly more numerical error than the explicit schemes, but are less expensive due to the increased allowable time-step size. However, this increase in error is large enough that by t = 5t_c the implicit schemes have saturated to the maximum error level at σ ≈ 1E0. Increasing the mesh resolution using the implicit scheme has little to no effect on the overall accuracy of the solver, suggesting that it is dominated by temporal error. Increasing the resolution for the explicit solver does improve the accuracy at all times in the simulation; however, this incurs at least an order of magnitude increase in total computational cost. By extrapolating the convergence study using the explicit scheme, we can conclude that an infeasibly high resource utilization would be required to achieve the same level of accuracy with the second-order STAR-CCM+ configuration as the higher-order PyFR simulations.

5.2 DNS of the Taylor-Green Vortex

5.2.1 Background

Simulation of the Taylor-Green vortex breakdown using the compressible Navier-Stokes equations has been undertaken for the comparison of high-order numerical schemes. It has been a test case for the first, second, and third high-order workshops [1]. It is an appealing test case for comparing numerical methods due to its simple initial and boundary conditions, as well as the availability of spectral DNS results for comparison from van Rees et al. [23].


Figure 1. Contours of density at t = 0, t = t_c, t = 5t_c, t = 50t_c for isentropic vortex advection with explicit PyFR P5 and the second-order explicit and third-order implicit STAR-CCM+ configurations.


Figure 2. Density error for isentropic vortex advection at t = t_c.

Figure 3. Density error for isentropic vortex advection at t = 5t_c.

Figure 4. Density error for isentropic vortex advection at t = 50t_c.



The initial flow field for the Taylor-Green vortex is specified as [1]

u = +U_0 sin(x/L) cos(y/L) cos(z/L),
v = −U_0 cos(x/L) sin(y/L) cos(z/L),
w = 0,
p = P_0 + (ρ_0 U_0² / 16)(cos(2x/L) + cos(2y/L))(cos(2z/L) + 2),
ρ = p / (R T_0),                                        (3)

where T_0 and U_0 are constants specified such that the flow Mach number based on U_0 is Ma = 0.1, effectively incompressible. The domain is a periodic cube with the dimensions −πL ≤ x, y, z ≤ +πL. For the current study we consider a Reynolds number Re = 1600 based on the length scale L and velocity scale U_0. The test case is run to a final non-dimensional time of t = 20t_c, where t_c = L/U_0.
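A minimal sketch of initializing this field on the periodic cube; N and the thermodynamic constants below are illustrative placeholders, not the paper's exact values.

```python
import numpy as np

# Sketch: initialise the Taylor-Green vortex field of Eq. (3) on the
# periodic cube -pi*L <= x, y, z <= pi*L. N and the thermodynamic
# constants are illustrative placeholders; the paper's setup fixes
# Ma = 0.1 and Re = 1600 rather than dimensional values.
N, L, U0, P0, rho0, Rgas, T0 = 64, 1.0, 1.0, 100.0, 1.0, 287.0, 0.3484

s = np.linspace(-np.pi * L, np.pi * L, N, endpoint=False)
x, y, z = np.meshgrid(s, s, s, indexing="ij")

u = +U0 * np.sin(x / L) * np.cos(y / L) * np.cos(z / L)
v = -U0 * np.cos(x / L) * np.sin(y / L) * np.cos(z / L)
w = np.zeros_like(u)
p = (P0 + (rho0 * U0**2 / 16.0)
     * (np.cos(2.0 * x / L) + np.cos(2.0 * y / L))
     * (np.cos(2.0 * z / L) + 2.0))
rho = p / (Rgas * T0)
```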

We are interested in the temporal evolution of the kinetic energy integrated over the domain,

E_k = (1/(ρ_0|Ω|)) ∫_Ω ½ ρ u·u dV,

and the dissipation rate of this energy, defined as ε = −dE_k/dt. We are also interested in the temporal evolution of the enstrophy,

ζ = (1/(ρ_0|Ω|)) ∫_Ω ½ ρ ω·ω dV,

where ω is the vorticity. For incompressible flows the dissipation rate can be related to the enstrophy by ε = 2(μ_0/ρ_0)ζ [1,23]. We can also define three different L∞ error norms. The first,

σ_1∞ = max_t |ε(t) − ε_DNS(t)|,

is the error in the observed dissipation rate relative to the spectral DNS data; the second and third are constructed analogously from the enstrophy-based dissipation rate 2(μ_0/ρ_0)ζ and from the difference between the observed and enstrophy-based dissipation rates (see Figures 12 to 14).
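As a sketch of these diagnostics, assuming the solution is sampled on a uniform periodic grid so that the vorticity can be formed with spectral (FFT) derivatives:

```python
import numpy as np

# Sketch: integrated kinetic energy E_k and enstrophy on a uniform periodic
# grid, plus the first error norm against reference DNS data. rho, u, v, w
# are assumed N^3 arrays with grid spacing dx; derivatives for the vorticity
# are formed spectrally, which is valid on the periodic cube.
def ddx(f, dx, axis):
    k = 2j * np.pi * np.fft.fftfreq(f.shape[axis], d=dx)
    shape = [1] * f.ndim
    shape[axis] = -1
    return np.fft.ifft(np.fft.fft(f, axis=axis) * k.reshape(shape),
                       axis=axis).real

def diagnostics(rho, u, v, w, dx, rho0=1.0):
    vol = rho.size * dx**3                     # |Omega| of the periodic cube
    wx = ddx(w, dx, 1) - ddx(v, dx, 2)         # vorticity components
    wy = ddx(u, dx, 2) - ddx(w, dx, 0)
    wz = ddx(v, dx, 0) - ddx(u, dx, 1)
    ek = np.sum(0.5 * rho * (u**2 + v**2 + w**2)) * dx**3 / (rho0 * vol)
    zeta = np.sum(0.5 * rho * (wx**2 + wy**2 + wz**2)) * dx**3 / (rho0 * vol)
    return ek, zeta

def sigma_1_inf(t, ek, t_dns, ek_dns):
    """Max-in-time error of the dissipation rate -dE_k/dt vs. DNS data."""
    eps = -np.gradient(ek, t)
    eps_dns = np.interp(t, t_dns, -np.gradient(ek_dns, t_dns))
    return np.max(np.abs(eps - eps_dns))
```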


For PyFR we use P1 to P8 schemes with structured hexahedral elements. Each mesh is generated to provide ∼256³ degrees of freedom, as shown in Table 5, based on the number of degrees of freedom per element. The interface fluxes are LDG [24] and Rusanov type [15]. Gauss-Legendre points are used for the solution point locations within the elements and as flux point locations on the faces of the elements. A five-stage fourth-order adaptive Runge-Kutta scheme [20,21,22] is used with maximum and relative error tolerances of 10^-6. The simulations are run on three NVIDIA K20c GPUs with the exception of the P1 simulation, which was run on six GPUs due to the available memory per card. We perform two sets of simulations, the first with the release version of PyFR 0.3.0 and the second with the experimental version of PyFR 0.3.0 including GiMMiK.

For STAR-CCM+ we generate a structured mesh of 256³ hexahedral elements via the directed meshing algorithm. This gives a total of 256³ degrees of freedom, as shown in Table 5, consistent with the number required for DNS [23]. For the second-order configuration we use the explicit time-stepping scheme provided with STAR-CCM+ with a constant CFL number of unity. We use a second-order spatial discretization with coupled energy and flow equations. We use an ideal gas formulation and laminar viscosity, since we expect to resolve all length and time scales in the flow. Periodic boundary conditions are used on all faces and all other settings are left at their default values. The third-order configuration is similar to the second-order configuration; however, we use the third-order MUSCL scheme for spatial discretization and second-order implicit time-stepping with Δt = 0.01t_c. The second-order configuration is run using all 12 cores of the Intel Xeon E5-2697 v2 CPU and the built in domain partitioning provided with STAR-CCM+. Due to increased memory requirements, the third-order configuration of STAR-CCM+ is run on five nodes of an InfiniBand interconnected cluster of Intel Xeon X5650 CPUs.

5.2.2 Results

Table 5. Configuration and results for Taylor-Green vortex simulations

Scheme                Elements   DOF    σ_1∞       σ_2∞       σ_3∞
STAR-CCM+ 2nd Order   256³       256³   1.97E-01   6.41E-01   5.85E-01
STAR-CCM+ 3rd Order   256³       256³   4.27E-02   2.35E-01   1.94E-01
PyFR P1               128³       256³   1.43E-01   4.38E-01   3.53E-01
PyFR P2               86³        258³   4.17E-02   1.36E-01   1.06E-01
PyFR P3               64³        256³   3.00E-02   3.80E-02   3.49E-02
PyFR P4               52³        260³   1.94E-02   3.42E-02   1.61E-02
PyFR P5               43³        258³   1.99E-02   2.96E-02   1.09E-02
PyFR P6               37³        259³   1.34E-02   1.93E-02   8.45E-03
PyFR P7               32³        256³   1.68E-02   1.98E-02   6.18E-03
PyFR P8               29³        261³   1.60E-02   1.68E-02   5.38E-03

Isosurfaces of Q-criterion are shown in Figure 5 to Figure 8 at various instants from simulations using the PyFR P8 scheme and the second-order and third-order STAR-CCM+ configurations. At the beginning of each simulation, up to t = 5t_c, the flow is dominated by large scale vortical structures, with length scales proportional to the wavelength of the initial sinusoidal velocity field. In Figure 6, at t = 10t_c, we see that the flow has undergone turbulent transition and contains a large number of small scale vortical structures. Significant differences are apparent between PyFR and the results from the second-order STAR-CCM+ configuration at this time. The PyFR simulation has a much broader range of turbulent scales than the STAR-CCM+ simulation.

Also, nearly all of the smallest scale structures have been dissipated by the second-order STAR-CCM+ configuration. In Figure 7, at t = 15t_c, we see that the PyFR simulation has an increasing number of very small turbulent structures, while the second-order STAR-CCM+ configuration only has a few intermediate scale structures. Finally, by t = 20t_c the turbulent structures predicted by the second-order STAR-CCM+ configuration have nearly completely dissipated, while PyFR has preserved them even until the end of the simulation. However, we see that increasing the order of accuracy of STAR-CCM+ with the third-order configuration significantly reduces the amount of numerical dissipation. These third-order results are qualitatively consistent with the high-order PyFR results, although some over-dissipation of small scale structures is still apparent at t = 15t_c and 20t_c.
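For reference, the Q-criterion visualized in Figures 5 to 8 is the standard second invariant of the velocity gradient tensor, Q = ½(‖Ω‖² − ‖S‖²); a sketch of its computation from a velocity gradient field:

```python
import numpy as np

# Sketch: Q-criterion from a velocity gradient field grad_u of shape
# (3, 3, nx, ny, nz), with grad_u[i, j] = du_i/dx_j. Q is the second
# invariant 0.5*(||Omega||^2 - ||S||^2); isosurfaces of positive Q mark
# the rotation-dominated regions shown in Figures 5 to 8.
def q_criterion(grad_u):
    S = 0.5 * (grad_u + np.swapaxes(grad_u, 0, 1))    # strain-rate tensor
    W = 0.5 * (grad_u - np.swapaxes(grad_u, 0, 1))    # rotation tensor
    return 0.5 * (np.sum(W**2, axis=(0, 1)) - np.sum(S**2, axis=(0, 1)))
```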

Plots of the temporal evolution of the kinetic energy dissipation rate are shown in Figure 9 for both STAR-CCM+ simulations and in Figure 10 for the PyFR P1 to P8 simulations. The second-order STAR-CCM+ configuration is overly dissipative, over-predicting the kinetic energy dissipation rate up to t_c ≈ 8 when compared to the spectral DNS results. After the peak dissipation rate, the second-order STAR-CCM+ configuration then under-predicts the kinetic energy dissipation rate up until the end of the simulation. This is consistent with our qualitative observations of the type and size of turbulent structures in the domain. The second-order STAR-CCM+ configuration quickly dissipates energy from the domain and, as a consequence, little energy is left to be dissipated during the later stages of decay. By increasing the order of accuracy with the third-order configuration of STAR-CCM+ we observe a significant improvement in the predicted dissipation rate. However, there are still some inaccuracies, particularly around the time of peak dissipation. For PyFR, it is clear that the kinetic energy dissipation rate rapidly approaches the spectral DNS results with increasing order of accuracy from P1 through P8. By P8 there is little difference between the current results and those of the reference spectral simulation, and it is significantly more accurate than either of the STAR-CCM+ simulations.


Plots of the temporal evolution of enstrophy are shown in Figure 9 for the STAR-CCM+ simulations and in Figure 11 for the PyFR simulations. The second-order STAR-CCM+ configuration under-predicts enstrophy throughout the simulation. Since enstrophy gives a measure of dissipation due to physical flow structures, we can conclude that a significant portion of the dissipation associated with the second-order STAR-CCM+ configuration is numerical. We see a significant improvement in the prediction of the temporal evolution of enstrophy with the third-order configuration of STAR-CCM+. However, there are still significant differences when compared to the reference spectral DNS data. We also observe that the PyFR simulations rapidly converge to the spectral DNS results with increasing order of accuracy. By P8 the results are nearly indistinguishable from the reference solution. This demonstrates that the higher-order PyFR simulations can accurately predict the turbulent structures present during the simulation, and that the majority of the observed kinetic energy dissipation is physical, rather than numerical, in nature.

To quantify the relative accuracy and cost of the STAR-CCM+ and various PyFR simulations we can compare the three proposed error norms σ_1∞, σ_2∞, and σ_3∞ against the total resource utilization required for each simulation. The error in the observed dissipation rate is shown in Figure 12 for all of the simulations, plotted against the resource utilization measured in £ × seconds. Our first observation is that all of the PyFR simulations, from P1 through P8, are cheaper than simulations using the second-order STAR-CCM+ configuration. In fact, the P1 to P3 simulations are nearly an order of magnitude cheaper than the second-order STAR-CCM+ configuration. The third-order STAR-CCM+ configuration also costs significantly less than the second-order configuration, since it uses an implicit time-stepping approach. Also, we find that GiMMiK can reduce the cost of the PyFR simulations by between 20% and 45%, depending on the order of accuracy. Interestingly, the computational costs of the P1 to P3 schemes are comparable, demonstrating that PyFR can produce fourth-order accurate results for the same cost as a second-order scheme. Secondly, we observe that all of the PyFR simulations are more accurate than the second-order STAR-CCM+ simulations for this, and all other metrics, including the temporal evolution of enstrophy in Figure 13 and the difference between the observed dissipation rate and that predicted from enstrophy, as shown in Figure 14. When compared to the third-order STAR-CCM+ configuration, PyFR results with similar error levels are less expensive. Or, conversely, PyFR simulations of the same computational cost are up to an order of magnitude more accurate.

5.3 Turbulent Flow Over a Circular Cylinder

5.3.1 Background

Flow over a circular cylinder has been the focus of several previous experimental and numerical studies. Its characteristics are known to be highly dependent on the Reynolds number Re, defined as

Re = ρUD/μ,

where U is the free-stream velocity, ρ is the fluid density, D is the cylinder diameter, and μ is the fluid viscosity. In the current study we consider flow over a circular cylinder at Re = 3,900, and an effectively incompressible Mach number of 0.2. This case sits in the shear-layer transition regime identified by Williamson [25] and contains several complex flow features including separated shear layers, turbulent transition, and a fully turbulent wake.


Figure 5. Isosurfaces of Q-criterion for the Taylor-Green vortex at t = 5t_c: PyFR P8 (left), STAR-CCM+ second-order (middle), and STAR-CCM+ third-order (right).

Figure 6. Isosurfaces of Q-criterion for the Taylor-Green vortex at t = 10t_c: PyFR P8 (left), STAR-CCM+ second-order (middle), and STAR-CCM+ third-order (right).

Figure 7. Isosurfaces of Q-criterion for the Taylor-Green vortex at t = 15t_c: PyFR P8 (left), STAR-CCM+ second-order (middle), and STAR-CCM+ third-order (right).

Figure 8. Isosurfaces of Q-criterion for the Taylor-Green vortex at t = 20t_c: PyFR P8 (left), STAR-CCM+ second-order configuration (middle), and STAR-CCM+ third-order configuration (right).


Figure 9. Dissipation rate (left) and enstrophy (right) from DNS of the Taylor-Green vortex using STAR-CCM+ (STAR 2nd-order and STAR 3rd-order against spectral DNS).

Recently, Lehmkuhl et al. [26] and Witherden et al. [18] have shown that at this Reynolds number the flow field oscillates at a low frequency between a low energy mode, referred to as Mode-L, and a high energy mode, referred to as Mode-H. Previous studies [27,28,29,30,31,32] had only observed one, the other, or some intermediate values between the two in this Reynolds number regime, since their averaging periods were not of sufficient length to capture such a low frequency phenomenon [26]. The objective of the current study is to perform long-period averaging using both PyFR and STAR-CCM+ to compare with the DNS results of Lehmkuhl et al. [26].

We use a computational domain with dimensions [−9D, 25D], [−9D, 9D], and [0, πD] in the stream-wise, cross-wise, and span-wise directions, respectively. The cylinder is centred at (0, 0, 0). The span-wise extent was chosen based on the results of Norberg [30], who found no significant influence on statistical data when the span-wise dimension was doubled from πD to 2πD. Indeed, a span of πD has been used in the majority of previous numerical studies [27,28,29,30], including the recent DNS study of Lehmkuhl et al. [26]. The stream-wise and cross-wise dimensions are also comparable to the experimental and numerical values used by Parnaudeau et al. [33] and those used for the DNS study of Lehmkuhl et al. [26].


Figure 10. Dissipation rate from DNS of the Taylor-Green vortex using PyFR.


Figure 11. Enstrophy from DNS of the Taylor-Green vortex using PyFR (panels: PyFR P2, P4, P6, and P8 against spectral DNS).


References

[1] Z. J. Wang, K. Fidkowski, R. Abgrall, F. Bassi, D. Caraeni, A. Cary, H. Deconinck, R. Hartmann, K. Hillewaert, H. T. Huynh, N. Kroll, G. May, P. O. Persson, B. van Leer, and M. Visbal. High-order CFD methods: Current status and perspective. International Journal for Numerical Methods in Fluids, 72(8):811–845, July 2013.

[2] B. Cockburn and C. W. Shu. TVB Runge-Kutta local projection discontinuous Galerkin finite element method for conservation laws. II. General framework. Mathematics of Computation, 52(186):411–435, 1989.

[3] B. Cockburn, S. Hou, and C. W. Shu. The Runge-Kutta local projection discontinuous Galerkin finite element method for conservation laws. IV. The multidimensional case. Mathematics of Computation, 54(190):545–581, 1990.

[4] Z. J. Wang. Spectral (finite) volume method for conservation laws on unstructured grids. Basic formulation. Journal of Computational Physics, 178(1):210–251, May 2002.

[5] Y. Liu, M. Vinokur, and Z. J. Wang. Discontinuous spectral difference method for conservation laws on unstructured grids. In C. Groth and D. W. Zingg, editors, Computational Fluid Dynamics 2004, pages 449–454. Springer Berlin Heidelberg, 2006.

[6] D. A. Kopriva. A conservative staggered-grid Chebyshev multidomain method for compressible flows. II. A semi-structured method. Journal of Computational Physics, 128(2):475–488, October 1996.

[7] H. T. Huynh. A flux reconstruction approach to high-order schemes including discontinuous Galerkin methods. In 18th AIAA Computational Fluid Dynamics Conference. American Institute of Aeronautics and Astronautics, 2007. AIAA 2007-4079.

[8] Z. J. Wang and H. Gao. A unifying lifting collocation penalty formulation including the discontinuous Galerkin, spectral volume/difference methods for conservation laws on mixed grids. Journal of Computational Physics, 228(21):8161–8186, November 2009.

[9] P. E. Vincent, P. Castonguay, and A. Jameson. A new class of high-order energy stable flux reconstruction schemes. Journal of Scientific Computing, 47(1):50–72, September 2010.

[10] T. Haga, H. Gao, and Z. J. Wang. A high-order unifying discontinuous formulation for the Navier-Stokes equations on 3D mixed grids. Mathematical Modelling of Natural Phenomena, 6(03):28–56, January 2011.

[11] D. M. Williams and A. Jameson. Energy stable flux reconstruction schemes for advection-diffusion problems on tetrahedra. Journal of Scientific Computing, 59(3):721–759, 2014.

[12] B. C. Vermeire, J. S. Cagnone, and S. Nadarajah. ILES using the correction procedure via reconstruction scheme. In 51st AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, Grapevine, TX, 2013. American Institute of Aeronautics and Astronautics. AIAA 2013-1001.

[13] B. C. Vermeire, S. Nadarajah, and P. G. Tucker. Canonical test cases for high-order unstructured implicit large eddy simulation. In 52nd Aerospace Sciences Meeting, National Harbor, MD, 2014. American Institute of Aeronautics and Astronautics. AIAA 2014-0935.

[14] B. C. Vermeire, S. Nadarajah, and P. G. Tucker. Implicit large eddy simulation using the high-order correction procedure via reconstruction scheme. International Journal for Numerical Methods in Fluids, 82(5):231–260, 2016.

[15] F. D. Witherden, A. M. Farrington, and P. E. Vincent. PyFR: An open source framework for solving advection-diffusion type problems on streaming architectures using the flux reconstruction approach. Computer Physics Communications, 185(11):3028–3040, November 2014.

[17] J. Slotnick, A. Khodadoust, J. Alonso, D. Darmofal, W. Gropp, E. Lurie, and D. Mavriplis. CFD vision 2030 study: A path to revolutionary computational aerosciences. Technical Report NASA/CR-2014-218178, NF1676L-18332, National Aeronautics and Space Administration, 2014.

[18] F. D. Witherden, B. C. Vermeire, and P. E. Vincent. Heterogeneous computing on mixed unstructured grids with PyFR. Computers & Fluids, 120:173–186, 2015.

[19] B. D. Wozniak, F. D. Witherden, F. P. Russell, P. E. Vincent, and P. H. J. Kelly. GiMMiK—Generating bespoke matrix multiplication kernels for accelerators: Application to high-order computational fluid dynamics. Computer Physics Communications, 202:12–22, 2016.

[21] E. Hairer and G. Wanner. Solving Ordinary Differential Equations II, volume 14 of Springer Series in Computational Mathematics. Springer Berlin Heidelberg, Berlin, Heidelberg, 1996.

[22] C. A. Kennedy, M. H. Carpenter, and R. M. Lewis. Low-storage, explicit Runge-Kutta schemes for the compressible Navier-Stokes equations. Applied Numerical Mathematics, 35(3):177–219, November 2000.
