CALTECH ASCI TECHNICAL REPORT 076
caltechASCI/2000.076
ASCI Alliance Center for Simulation of Dynamic Response in Materials
California Institute of Technology
FY 2000 Annual Report
Michael Aivazis, Bill Goddard, Dan Meiron, Michael Ortiz,
James C. T. Pool, Joe Shepherd, Principal Investigators
Contents

1 Introduction and Overview 1
1.1 Introduction 1
1.2 Administration of the Center 2
1.3 Overview of the integrated simulation capability 3
1.4 Highlights of Research Accomplishments 4
2 Integrated Simulation Capability 8
2.1 Introduction 8
2.2 Algorithmic integration 8
2.3 Fluid dynamics algorithms 14
2.4 Solid Mechanics algorithms 20
2.5 Software integration 22
3 High Explosives 28
3.1 Overview of FY00 Accomplishments 28
3.2 Personnel 29
3.3 Material Properties and Chemical Reactions 29
3.4 Engineering Models of Explosives 30
3.5 Eulerian-Lagrangian Coupling Algorithms 31
3.6 Reduced Reaction Modeling 33
4 Solid Dynamics 48
4.1 Overview of FY 00 Accomplishments 48
4.2 Personnel 48
4.3 Nanomechanics 49
4.4 Mesomechanics 52
4.5 Macromechanics 54
4.6 Polymorphic Phase Transitions 59
4.7 Eulerian Elastic-Plastic Solver 60
4.8 FY 01 objectives 64
5 Materials Properties 71
5.1 Overview of FY 00 Accomplishments 71
5.2 Personnel 72
5.3 Materials properties for high explosives 73
5.4 Materials Properties for Solid Dynamics 76
5.5 Materials properties methodology development 80
6 Compressible Turbulence 82
6.1 Introduction 82
6.2 Overview of FY 00 Accomplishments 83
6.3 Personnel 84
6.4 Pseudo-DNS of Richtmyer-Meshkov instability 85
6.5 LES of Richtmyer-Meshkov instability 87
6.6 Implementation of CFD Euler solvers within GrACE 90
6.7 DNS of Rayleigh-Taylor instabilities 94
6.8 FY 01 objectives 99
7 Computational Science 101
7.1 Overview of FY 2000 Accomplishments 101
7.2 Personnel 101
7.3 Scalability 102
7.4 Visualization 104
7.5 Scalable I/O 106
7.6 Algorithms 108
1 Introduction and Overview

1.1 Introduction
This annual report describes research accomplishments for FY 00 of the Center for Simulation of Dynamic Response of Materials. The Center is constructing a virtual shock physics facility in which the full three-dimensional response of a variety of target materials can be computed for a wide range of compressive, tensional, and shear loadings, including those produced by detonation of energetic materials. The goals are to facilitate computation of a variety of experiments in which strong shock and detonation waves are made to impinge on targets consisting of various combinations of materials, compute the subsequent dynamic response of the target materials, and validate these computations against experimental data.
An illustration of the simulations that are to be facilitated by the Center's Virtual Test Facility (VTF) is shown in Figure 1.1. The research is centered on the three primary stages required to conduct a virtual experiment in this facility: detonation of high explosives, interaction of shock waves with materials, and shock-induced compressible turbulence and mixing. The modeling requirements are addressed through five integrated research initiatives which form the basis of the simulation development road map that guides the key disciplinary activities:
1 Modeling and simulation of fundamental processes in detonation,
2 Modeling dynamic response of solids,
3 First principles computation of materials properties,
4 Compressible turbulence and mixing, and
5 Computational and computer science infrastructure
Figure 1.1: Illustrations of three key simulations performed using the Virtual Test Facility. Top left: high velocity impact generated by the interaction of a detonation wave with a set of solid test materials. Top right: high velocity impact generated by a flyer plate driven by a high explosive plane wave lens. Bottom: configuration used to examine compressible turbulent mixing.
1.2 Administration of the Center

1.2.1 Personnel Overview
The center activities are guided by five principal investigators:
J Shepherd High Explosives
M Ortiz Solid Dynamics
W A Goddard Materials Properties
D Meiron Compressible Turbulence
J C T Pool Computational Science
M Aivazis Computational Science and Software Integration
In FY 00 the center personnel numbered as follows:
• 16 Caltech faculty (including the center steering committee)
• 12 external faculty affiliated with the center via sub-contracts
• 18 Caltech graduate students
• 24 research staff and postdoctoral scholars
• 10 administrative staff (primarily part-time support from the Caltech Center for Advanced Computing Research (CACR))
Figure 1.2: Diagrammatic representation of the VTF software architecture.
Detailed personnel listings are provided at the beginning of each chapter, detailing the activities of each disciplinary effort within the center.
1.2.2 Sub-contracts
In addition to participants based at Caltech, the Center is associated with several sub-contractors who provide additional support in a few key areas. In the table below we list the contractors, their institutional affiliation, and their area of research:
R. Phillips    Brown University    Quasi-continuum methods for plasticity
R. Cohen    Carnegie Institution of Washington    High pressure equation of state of metals
G. Miller    U.C. Davis    Multi-phase Riemann solvers
R. Ward    Univ. of Tennessee    Large scale eigenvalue algorithms
C. Kesselman    Univ. of Southern California, ISI    Metacomputing, Globus
D. Reed    Univ. of Illinois    Scalable I/O
M. Parashar    Rutgers University    Parallel AMR
1.3 Overview of the integrated simulation capability

1.3.1 VTF software architecture
The VTF software architecture is illustrated in Figure 1.2. The top layer is a scripting interface written in the Python scripting language, which sets up all aspects of the simulation and coordinates the interaction of the simulation with the operating system and platform. Also associated with the scripting environment is a materials properties database. The database provides information to the solvers regarding equation of state, reaction rates, etc.
The next layer consists of the VTF computational engines. These engines are packaged as shared objects for which Python bindings are then generated. At present the VTF architecture supports two such engines: a 3-D parallel Eulerian CFD solver, which is used for simulations of high explosive detonation and simulations of compressible turbulent mixing, and a 3-D Lagrangian solid dynamics solver. The solid dynamics solver is fully parallel as of this writing.
At the next layer we have designated some of the lower level functionality of the engines. For example, the CFD solver ultimately will have the ability to perform 3-D simulations using patch-based parallel AMR. Similarly, the solid dynamics solver will ultimately also possess a capability to perform parallel adaptive meshing.
Finally, at the lowest level are services used to facilitate various low-level aspects of the simulations, such as the ability to access distributed resources via meta-computing infrastructures such as Globus, and facilities for parallel communication and scalable disposition of the large data sets generated during the computation.
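The layering described above can be sketched in miniature with a short Python fragment. This is purely illustrative: the class names, the database entry, and the `lookup`/`advance` methods are hypothetical stand-ins for the actual VTF bindings, which are compiled shared objects exposed to Python.

```python
# Hypothetical sketch of the layered VTF scripting idea; the names here
# are illustrative stand-ins, not the actual VTF API.

class MaterialsDatabase:
    """Toy stand-in for the materials properties database layer."""
    def __init__(self):
        # A single made-up equation-of-state entry for demonstration.
        self._eos = {"perfect-gas": {"gamma": 1.4}}

    def lookup(self, name):
        return self._eos[name]

class FluidEngine:
    """Stand-in for a compiled CFD engine exposed through Python bindings."""
    def __init__(self, eos_params):
        self.gamma = eos_params["gamma"]
        self.time = 0.0

    def advance(self, dt):
        # A real engine would take a hydrodynamic step here.
        self.time += dt

def run_simulation(steps, dt):
    # The scripting layer wires the database to the engine and drives it.
    db = MaterialsDatabase()
    engine = FluidEngine(db.lookup("perfect-gas"))
    for _ in range(steps):
        engine.advance(dt)
    return engine.time
```

The point of the sketch is the separation of concerns: the script owns the setup and the step loop, while the engine only advances the state it was configured with.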
The philosophy of this software architecture is to enable a multi-pronged approach to the simulation of high velocity impact problems and the associated fluid-solid interaction. For example, it is well known that such simulations can be performed using a purely Lagrangian approach, a purely Eulerian approach, or some mixture of the two (such as ALE). The objective of the VTF architecture is to provide a flexible environment in which such simulations can be performed and the results of the differing approaches can be assessed.
As of the end of FY00 we have completed a fully three-dimensional coupled simulation of a detonation interacting with a tantalum target. The simulation was run on all three ASCI platforms. The present simulation utilizes a parallel fluid mechanics solver and a fully parallel solid mechanics solver. In addition, a full implementation of the Python-based problem solving environment has been completed. Details of the implementation can be found in Chapter 2.
1.4 Highlights of Research Accomplishments

1.4.1 High Explosives
Material properties and chemical reactions The detailed reaction mechanism and rate constants were completed for HMX (C4H8N8O8, cyclotetramethylene-tetranitramine) and RDX (C3H6N6O6, cyclotrimethylene-trinitramine) gas phase decomposition. This was a continuation of work begun in previous years. Quantum mechanical computations were used to compute potential energy surfaces and thermal rate constants. Molecular dynamics was used to examine the transfer of mechanical to thermal energy immediately behind a shock wave. A new method of implementing reactive force fields was developed and applied to RDX reactions initiated by a shock front in a crystal lattice at finite temperature. The Intrinsic Low Dimensional Manifold (ILDM) method was used to compute a reduced reaction mechanism for hydrogen-oxygen-nitrogen combustion. The ILDM was implemented in a two-dimensional, Adaptive Mesh Refinement (AMR) solution for a propagating detonation.
Engineering models of explosives Improved models of the equation of state for a Plastic-Bonded Explosive (PBX) were developed and implemented in the Virtual Test Facility (VTF). The structure of the Zel'dovich-von Neumann-Döring (ZND) solution for the Johnson-Tang-Forest (JTF) model of PBX detonation was computed.
Eulerian-Lagrangian Coupling Algorithms The ghost fluid Eulerian-Lagrangian (GEL) coupling algorithm was implemented using the Grid Adaptive Computational Engine (GrACE) library and used for parallel AMR simulations of shock and detonation wave propagation in yielding confinement simulated with a linear elastic finite-element model. A two-dimensional test problem with an exact numerical solution was developed and used to evaluate GEL schemes.
1.4.2 Solid Dynamics
Solid mechanics engine Accomplishments during FY 00 include the serial implementation of adaptive mesh refinement (subdivision) and coarsening (edge collapse), and the fully parallel implementation of the solid dynamics engine within the VTF3D (without mesh adaption).
Nanoscale At the nanoscale we have continued to carry out quasi-continuum simulations of nanoindentation in gold, and mixed continuum/atomistic studies of anisotropic dislocation line energies and vacancy diffusivities in stressed lattices.
Microscale At the microscale we have developed a phase field model of crystallographic slip and the forest hardening mechanism; we have refined our mesoscopic model of Ta by investigating the strengths of jogs resulting from dislocation intersections and the dynamics of dislocation-pair annihilation, and by importing a variety of fundamental constants computed by the MP group.
Macroscale At the macroscale we have focused on various enhancements of our engineering material models, including the implementation and verification of a Lagrangian artificial viscosity scheme for shock capturing in the presence of finite deformations and strength; the implementation of an equation of state and elastic moduli for Ta computed from first principles by Ron Cohen (MP group); and the implementation of the Steinberg-Guinan model for the pressure dependence of strength.

1.4.3 Materials Properties

High Explosives In simulations supporting high explosives, the MP team has completed the decomposition mechanism of RDX and HMX molecules using density functional theory, obtained a unified decomposition scheme for key energetic materials, obtained a detailed reaction network of 450 reactions describing nitramines, developed ReaxFF, a first-principles based bond-order dependent reactive force field for nitramines, and pursued MD simulations of nitramines under shock loading conditions.
Solid dynamics In simulations supporting solid dynamics, the MP team has developed a first-principles qEAM force field for Ta. We have used this force field to simulate the melting curve of Ta in shock simulations up to 300 GPa. We have also investigated properties related to single-crystal plasticity, particularly core energies for screw and edge dislocations, Peierls energies for dislocation migration, and kink nucleation energies. We have simulated vacancy formation and migration energies, related to vacancy aggregation and spall failure. We have run high-velocity impact MD simulations to investigate spall failure in materials. We have simulated a thermal equation of state for Ta from density functional theory calculations, and have simulated the elasticity of Ta versus P to 400 GPa and T to 10000 K. Finally, we have begun work on Fe by examining the hcp phases of Fe.
Methodology In methodological developments and software integration, we have developed the MPI-MD program, which allows parallel computations of materials with millions of atoms on hundreds of processors. We have developed an algorithm for the quantum mechanical eigenproblem that uses a block-tridiagonal representation of a matrix to yield more efficient scaling of the eigensolver. We have developed a variational quantum Monte Carlo program to yield more accurate simulations of metals at high temperature and pressure.
Materials Properties Database Finally, we have begun work on the materials properties database, to allow archival of QM and MD simulations and automatic generation of the derived properties required by the HE and SD efforts.

1.4.4 Compressible Turbulence
Pseudo DNS Simulations of 3-D R-M instability This work is ongoing. We have to date developed a simulation capability using the WENO scheme and have performed a simulation of R-M instability with reshock. LES modeling was also included. A key issue is the overall dissipative nature of the advection scheme, which can contaminate the small scale behavior seen by the LES model.
Sub-grid modeling for LES of compressible turbulence We have to date implemented the LES model of Pullin along with the use of high order advection schemes such as WENO. At present no further development of the sub-grid model has been contemplated, since the main issue that needs to be overcome is the proper interplay of high order advection schemes with turbulence modeling.
Development of 3-D AMR solver This work is ongoing. We have successfully developed a 3-D solver for compressible flow utilizing adaptive mesh refinement under the GrACE computational framework. We have begun the investigation of Richtmyer-Meshkov instability with reshock using the AMR capability.
High resolution 3-D DNS of R-M and R-T flows We have developed and examined two parallel codes, one a fully compressible multi-species DNS code with full physical viscosity utilizing Padé-based methods and the other a high order incompressible spectral element solver. Both codes have been implemented on the ASCI platforms. This work is ongoing.
1.4.5 Computational Science
Pyre During FY 2000 we made significant progress towards the full implementation of our problem solving environment. Details of this progress are reported in Section 2.5.
Scalability We have conducted extensive studies of the scalability properties of the codes that were used to achieve our goals for FY 99. These studies are discussed in detail in Section 7.3.
Visualization The primary focus of our visualization activities was the construction of custom modules for the IRIS Explorer visualization environment in order to support the current needs of the Center. In addition, we have identified a small set of candidate visualization engines for integration into Pyre. This effort is discussed in detail in Section 7.4.
Distributed computing We completed an investigation of the Globus facilities necessary for the various aspects of remote staging and remote data access. Prototype modules that employ them have been constructed and an effort is well underway for a complete integration of the relevant Globus facilities in Pyre.
Scalable I/O We have performed performance studies of the various layers of the Scalable I/O infrastructure that were made available to us during this year. Details can be found in Section 7.5.
2 Integrated Simulation Capability

2.1 Introduction

• Developed and implemented a fully parallel version of adlib, the center's solid solver.
• Developed and implemented a shock physics capability for the solid solver.
• Further developed the fluid solver RM3d so that it can now deal with general equations of state. This is essential for HE simulation.
• Integrated an improved EOS for tantalum with parameters provided by the Materials Properties group.
• Performed a fully integrated simulation of a detonation interacting with a tantalum target in the VTF on the ASCI platforms.
In the sections below we provide details on the items listed above.
2.2 Algorithmic integration

2.2.1 An overview of the fully integrated fluid-solid algorithm
The algorithm for fluid-solid coupling used for coupled simulations in the VTF is shown graphically in Fig. 2.1. The algorithm itself is of splitting type, in which each solver, fluid and solid, transfers information relevant to the coupling, coordinates a time step, and then proceeds to perform its respective tasks independently. Note that each step is performed by the respective solver in its own process space. Communication between process spaces is performed using a server process for the fluid and solid, which then communicates to a set of clients which carry out the solution task. Thus the algorithms can in fact run on differing architectures or across the grid. The algorithm proceeds as follows.
Fluid-pressure update The fluid solver uses interpolation via the level set to compute the boundary pressures, which are then sent to the solid server to provide the loading.
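The exchange-coordinate-advance cycle described above can be sketched as follows. This is a toy model of the splitting step, not VTF code; the dictionary fields are hypothetical placeholders for the data the fluid and solid servers actually exchange.

```python
# Hedged sketch of the splitting-type coupling step: exchange interface
# data, agree on a common time step, then advance each solver
# independently. All field names are illustrative.

def coupled_step(fluid, solid):
    # 1. Exchange coupling data: the fluid sends boundary pressures,
    #    the solid sends its boundary velocity (and position).
    solid["load"] = fluid["boundary_pressure"]
    fluid["boundary_velocity"] = solid["velocity"]
    # 2. Coordinate a common stable time step between the two solvers.
    dt = min(fluid["dt_stable"], solid["dt_stable"])
    # 3. Each solver advances over dt independently, in its own
    #    process space in the real implementation.
    fluid["t"] += dt
    solid["t"] += dt
    return dt
```

In the VTF the two sides run as separate server/client process groups, so step 1 is a message exchange rather than a dictionary assignment; the sketch only captures the ordering of the splitting step.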
2.2.2 The Closest Point and Distance Transform
This section presents a new algorithm for computing the closest point transform to a manifold on a rectilinear grid in low dimensional spaces. The closest point transform finds the closest point on a manifold, and the Euclidean distance to a manifold, for all the points in a grid (or the grid points within a specified distance of the manifold). We consider manifolds composed of simple geometric shapes, such as a set of points, piecewise linear curves, or triangle meshes. The algorithm computes the closest point on and distance to the manifold by solving the Eikonal equation |∇u| = 1 by the method of characteristics. The method of characteristics is implemented efficiently with the aid of computational geometry and polygon/polyhedron scan conversion. The computed distance is accurate to within machine precision. The computational complexity of the algorithm is linear in both the number of grid points and the complexity of the manifold; thus it has optimal computational complexity. The algorithm was implemented for triangle meshes in 3D. For details on the implementation and performance of the algorithm, see Section 7.3.
Figure 2.1: Flowchart listing the steps required for each solver to implement the fluid-solid algorithm utilizing the level set capability.
The distance transform is the value of the distance for the points in a grid that surrounds the surface in question. It transforms an explicit representation of a manifold into an implicit one: the manifold is implicitly represented as the level set of distance zero. The closest point transform is the value of the closest point on the manifold for the points in the grid.
The distance and closest point transforms are important in several applications. The distance transform can be used to convert an explicit surface into a level set representation of the surface. Algorithms for working with the level set are often simpler and more robust than dealing with the surface directly. The closest point transform is useful when one needs information about the closest point on a surface in addition to the distance. Each point on a surface has a position and may have an associated velocity, color, or other data.
We have used the closest point transform to explicitly track the location of the solid-fluid interface in the VTF coupled solid mechanics / fluid mechanics computations [64]. Using a closest point transform, the Lagrangian solid mechanics code can communicate the position and velocity of the solid interface to an Eulerian fluid mechanics code. The fluid grid spans the entire domain, inside and outside the solid; thus only a portion of the grid points lie in the fluid. The solid mechanics is done on a tetrahedral mesh, and the boundary of the solid is a triangle mesh surface. Computing the distance transform to this surface on the fluid mechanics grid indicates which grid points are outside the solid and thus in the fluid domain. Through the closest point transform one can implement boundary conditions for the fluid at the solid boundary. Because the solid / fluid interface is time dependent, it is necessary to recreate the closest point transform at each time step. It is highly desirable that the closest point transform have linear computational complexity in both the size of the fluid grid and the solid mesh. If the closest point transform (CPT) does not have linear computational complexity, determining the fluid boundary condition through the CPT would likely dominate the computation.
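As a toy illustration of how the sign of a distance function marks fluid versus solid grid points, the sketch below classifies points against an analytic signed distance to a sphere (so no mesh or CPT machinery is needed). The function names are illustrative; the sign convention, positive inside the solid, matches the C_s convention used later in this chapter.

```python
import math

# Toy illustration (not VTF code): classify Cartesian grid points as
# fluid or solid using the sign of a signed distance function. The
# "solid" here is an analytic sphere, so the distance is exact.

def signed_distance_to_sphere(p, center, radius):
    # Positive inside the solid, negative in the fluid.
    return radius - math.dist(p, center)

def classify(points, center, radius):
    return ["solid" if signed_distance_to_sphere(p, center, radius) > 0
            else "fluid" for p in points]
```

In the VTF the analytic distance is replaced by the CPT evaluated against the triangle mesh bounding the solid, but the classification step is the same sign test.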
Previous Work
There has been previous work on the closest point transform and the distance transform, but these methods are not well suited to computing the CPT for the fluid/solid interface. First consider the brute force approach. The closest point transform to a manifold may be computed directly by iterating over the geometric primitives in the manifold as one iterates over the grid points. If there are M geometric primitives in the manifold and N grid points, the computational complexity of the brute force algorithm is O(MN). This calculation would dominate the simulation.
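For concreteness, here is a minimal 2D version of the brute force approach, using line segments as the geometric primitives; the double loop makes the O(MN) cost explicit. This is an illustrative sketch, not the Center's implementation.

```python
# Brute-force closest point transform in 2D: for every grid point, loop
# over all M segment primitives, keeping the nearest closest point.

def closest_point_on_segment(p, a, b):
    # Project p onto the line through a and b, clamped to the segment.
    ax, ay = a; bx, by = b; px, py = p
    vx, vy = bx - ax, by - ay
    denom = vx * vx + vy * vy
    t = 0.0 if denom == 0 else max(0.0, min(1.0, ((px - ax) * vx + (py - ay) * vy) / denom))
    return (ax + t * vx, ay + t * vy)

def brute_force_cpt(grid_points, segments):
    dist, cp = [], []
    for p in grid_points:                  # O(N) grid points ...
        best_d, best_q = float("inf"), None
        for a, b in segments:              # ... times O(M) primitives
            q = closest_point_on_segment(p, a, b)
            d = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
            if d < best_d:
                best_d, best_q = d, q
        dist.append(best_d)
        cp.append(best_q)
    return dist, cp
```

The improved algorithm described below avoids exactly this all-pairs loop by restricting each primitive to the grid points inside its characteristic polyhedron.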
Next consider finite difference methods. One can use upwind finite difference methods to solve the Eikonal equation and obtain an approximate distance transform [78]. The initial data is the value of the distance on the grid points surrounding the surface; this initial condition can be generated with the brute force method. An upwind finite difference scheme is then used to propagate the distance to the rest of the grid points. The scheme may be solved iteratively, yielding a computational complexity of O(αN), where α is the number of iterations required for convergence. The scheme may also be solved by ordering the grid points so that information is always propagated in the direction of increasing distance; this is Sethian's fast marching method [98], with computational complexity O(N log N). Finite difference methods are not well suited for computing the CPT to the fluid/solid interface because they need an initial condition specified on the fluid grid. Also, these methods compute only the distance; the closest point would have to be computed separately.
Finally, consider LUB-tree methods. One can use lower-upper-bound tree methods to compute the distance and closest point transforms [54]. The surface is stored in a tree data structure in which each subtree can return upper and lower bounds on the distance to any given point; this is accomplished by constructing bounding boxes around each subtree. For each grid point, the tree is searched to find the closest point on the surface. As the search progresses, the tree is pruned by using upper and lower bounds on the distance. Since the average computational complexity of each search is O(log M), the overall complexity is O(N log M). LUB-tree methods are not well suited for the fluid/solid interface problem because they do not automatically compute the CPT only within a certain distance of the manifold; in order to use LUB-tree methods, one would first need to determine which grid points are close to the manifold.
The CPT Algorithm
In this section we develop an improved closest point transform algorithm for computing the closest point to a manifold for the points in a regular grid. As a first step in the algorithm, we need something like a Voronoi diagram for the manifold. Instead of computing polyhedra that exactly contain the closest grid points to a point, we will compute polyhedra that at least contain the closest grid points to the components of the manifold. These polyhedra can then be scan converted to determine the grid points that are possibly closest to a given component.
We consider the closest point transform for a triangle mesh surface in 3D. For a given grid point, the closest point on the triangle mesh lies on one of the triangle faces, edges, or vertices. We find polyhedra which contain the grid points which are possibly closest to the faces, edges, or vertices. Suppose that the closest point on the surface ξ to a grid point x lies on a triangular face. The vector from ξ to x is orthogonal to the face; thus the closest points to a given face must lie within a triangular prism defined by the face and the normal vectors at its three vertices. The prism defined by the face and the outward/inward normals contains the points of positive/negative distance from the face. See Figure 2.2a for the face polyhedra of an icosahedron.
Consider a grid point x whose closest point on the surface ξ is on an edge. Each edge in the mesh is shared by two faces. The closest points to an edge must lie in a cylindrical wedge defined by the line segment and the normals to the two adjacent faces. If the outside/inside angle between the two adjacent faces is less than π, then there are no points of positive/negative distance from the line segment. See Figure 2.2b for the edge polyhedra of an icosahedron; Figure 2.2c shows a single edge polyhedron.
Finally, consider a grid point x whose closest point on the surface ξ is on a vertex. Each vertex in the mesh is shared by three or more faces. The closest points to a vertex must lie in a cone defined by the normals to the adjacent faces. If the mesh is convex/concave at the vertex then there will only be a cone outside/inside the mesh and only points of positive/negative distance. If the mesh is neither convex nor concave at the vertex there are neither positive nor negative cones. Figure 2.2d shows the vertex polyhedra of an icosahedron.
We present a fast algorithm for computing the distance and closest point transform to a triangle mesh surface. Let F be the set of faces, E be the set of edges, and V the set of vertices. Let d_ijk and cp_ijk denote the distance to the surface and the closest point on the surface for the points in a 3D grid.

// Loop over the scan converted points.
for each (i, j, k) ∈ G:
closest point and distance computations for the grid points. The O(M) term represents the construction of the polyhedra.
2.3 Fluid dynamics algorithms

In this section, we describe the CFD engine RM3d in the Virtual Test Facility (VTF).
The code operates in two and three dimensional Cartesian, and axisymmetric, geometries. The time stepping is second order Runge-Kutta. The fluxes at the cell interfaces may be calculated either by the equilibrium flux method (EFM), a kinetic flux vector splitting scheme [86], or by the Godunov [41] or Roe [40] methods (flux difference splitting schemes). Second order accuracy is achieved via linear reconstruction with van Leer type slope limiting [114] applied to projections in characteristic state space. The code is flexible enough to allow for multiple species using level sets (ζ) and a volume-of-fluid approach. The two dimensional version of this CFD engine has been validated against the shock-contact discontinuity experiments of Sturtevant and Haas [108]. Furthermore, the two dimensional version of the CFD engine has been used successfully for Richtmyer-Meshkov instability investigations [96, 97]. The code is implemented in parallel using the MPI message passing library on a variety of platforms including Intel PCs, IBM SP2 (ASCI Blue Pacific at LLNL and the SDSC Blue Horizon), SGI Origin 2000 (Nirvana at LANL), Intel Paragon (ASCI Red at Sandia), and Beowulf clusters. The scalability of both the core fluid solvers and the coupling algorithms was good, and quantifications of scalability are shown in the chapter on computational science.
The major enhancements to the CFD engine during FY00 were the following:
1 The development of a general equation of state solver,
2 Extension of the fluid-solid coupling algorithm to three dimensions, and
3 Implementation of a version of the solver within the adaptive mesh refinement framework, GrACE.
The last item above is described in the chapter on compressible turbulence. The other two items are discussed in detail below.
2.3.1 RM3d: Parallel CFD Engine
RM3d is a CFD code which solves the Euler equations, written below in strong conservation form, for inviscid compressible flow:

U_t + F_x(U) + G_y(U) + H_z(U) = S(U),   (2.1)

where

F(U) = {ρu, ρu^2 + p, ρuv, ρuw, (E + p)u, ρλu}^T,
G(U) = {ρv, ρuv, ρv^2 + p, ρvw, (E + p)v, ρλv}^T,
H(U) = {ρw, ρuw, ρvw, ρw^2 + p, (E + p)w, ρλw}^T.   (2.3)

The above equations are closed by a general equation of state (EoS) expressed functionally as χ(p, e, ρ, λ) = 0. As of FY99, RM3d capabilities were limited to a perfect gas EoS, and the fluxes were computed with either the Godunov or the Equilibrium Flux Method. This year a solver was developed which makes no assumption about the EoS; thermodynamically relevant quantities such as the sound speed are computed by linking to an EoS package. The source term in the species continuity equation is computed by linking with a chemistry package. Presently, the EoS packages used are the JTF (Johnson-Tang-Forest) EoS, a Mie-Grüneisen EoS with an ad hoc heat release term, and a perfect gas EoS. Given below is the method implemented for the general EoS solver.
RM3d: General EoS Solver
We extended the work of Glaister [40] to multiple species. Given left and right states (U_L, U_R) at a cell interface i + 1/2 between cells i and i + 1, the flux in the Roe method is evaluated using a Roe-averaged state. The slope in cell i is computed as follows:
(∂V/∂ξ)_i = [L]_i^{-1} minmod(Ṽ_i, Ṽ_{i+1}, Ṽ_{i−1}),   (2.9)

where Ṽ_{i+k} = [L]_i V_{i+k}, k = −1, 0, 1, is the projection of V onto the characteristic space, and the minmod function provides the slope limiting. Here [L] is the matrix of left eigenvectors of the Jacobian ∂F/∂V.
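A common three-argument minmod, shown below as a hedged sketch of the limiter in Eq. (2.9), returns zero when its arguments disagree in sign (an extremum) and otherwise the argument of smallest magnitude, which suppresses spurious oscillations near discontinuities.

```python
# Three-argument minmod limiter: zero at extrema, otherwise the
# smallest-magnitude argument with the common sign. Illustrative of the
# limiting in Eq. (2.9), applied there to characteristic projections.

def minmod(a, b, c):
    if a > 0 and b > 0 and c > 0:
        return min(a, b, c)
    if a < 0 and b < 0 and c < 0:
        return max(a, b, c)
    return 0.0
```

In the solver this is applied componentwise to the characteristic variables Ṽ before transforming back with [L]^{-1}.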
assumption in the development of the algorithm is that the coupling is explicit The
two solvers exchange information at the beginning of every time step A key idea ofthis coupling algorithm is that the zero mass ¤ux boundary condition for the Eulerequations at the interface between the ¤uid and the solid be strictly enforced Thetraction boundary condition is applied via imposition of the ¤uid pressure forces onthe Lagrangian boundary of the solid The Eulerian ¤uid domain,Ω, is decomposed asfollows:
Ω = {Ωlmn|l = 1 · · · L, n = 1 · · · N, m = 1 · · · M }
Ωlmn= {[xi,j,k, xi+1,j,k] × [yi,j,k, yi,j+1,k] × [zi,j,k, zi,j,k+1]},
i = 1 · · · Il, j = 1 · · · Jm, k = 1 · · · Kn}, (2.11)where the ¤uid subdomainsΩlmnreside on a logical Cartesian mesh ofL, M, N pro-cessors along thex, y, z directions, respectively In the coupling, it is only the theLagrangian boundary between the ¤uid and the solid which is of concern and not theentire Lagrangian domain of the solid solver The Lagrangian boundaryδΩ is broadcast
to all processors
δΩ = {Sp|p = 1 · · · P }, Sp= {∆q}, q = 1 · · · Qp, (2.12)i.e., the Lagrangian boundary comprises ofP triangulated surfaces, Sp The pressure,velocity and position of the nodes on the Lagrangian boundary are exchanged Giventhe Lagrangian boundary, a signed distance level-set function is computed using the
demonstrably optimal Closest Point Transform (CPT) algorithm (see chapter on computational science):

φ(x̄_{i,j,k}, t) = C_s min[d(x̄_{i,j,k}, S_p), p = 1···P],    (2.13)

C_s = +1 (solid),  C_s = −1 (fluid).    (2.14)

Note that the level set φ(x̄_{i,j,k}, t) = 0 defines δΩ. Thermodynamic variables are then extrapolated by advection in pseudo-time. We define ψ ≡ (p, ρ) if the perfect gas EoS is used, or ψ ≡ (e, ρ, λ_1, ..., λ_n) for the general EoS solver. Then,
∂ψ/∂τ + n̂ · ∇ψ = 0,    n̂(x̄_{i,j,k}, t) = ∇φ,

where n̂ is the normal to the level set φ and τ is the pseudo-time (the gradient of a signed distance function has unit magnitude, so ∇φ is already the unit normal). The above extrapolation is solved in a band of ghost cells, i.e., 0 < φ(x̄_{i,j,k}, t) < Φ. Typically we choose Φ to be four to five times the mesh spacing. It is clear that nearest-neighbor communication between the processors is required at the end of each pseudo-time step in the above extrapolation by advection. In the same band of ghost cells we reconstruct the velocity field to enforce the zero mass flux boundary condition by extrapolation and reflection of the normal velocity component in a local frame attached to δΩ, with ψ ≡ (ū·î, ū·ĵ, ū·k̂).
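In one dimension this extrapolation by advection reduces to a first-order upwind sweep in pseudo-time. The following is our sketch with hypothetical names and a unit mesh spacing; the actual solver operates on the 3-D banded level set with nearest-neighbor communication:

```python
import numpy as np

def extrapolate_ghost(psi, phi, n_sweeps=20, dtau=0.5):
    """Sketch of extrapolation by advection in pseudo-time:
    solve d(psi)/dtau + n * d(psi)/dx = 0 with first-order upwinding,
    updating only cells inside the ghost band 0 < phi < Phi.
    phi is a signed distance, so n = d(phi)/dx has |n| = 1."""
    psi = psi.copy()
    Phi = 4.0  # band width: "four to five times the mesh spacing"
    for _ in range(n_sweeps):
        n = np.gradient(phi)              # normal component along x
        band = (phi > 0) & (phi < Phi)    # ghost band on the solid side
        # upwind difference: information flows in the +n direction
        dpsi_m = psi - np.roll(psi, 1)    # backward difference
        dpsi_p = np.roll(psi, -1) - psi   # forward difference
        upwind = np.where(n > 0, dpsi_m, dpsi_p)
        psi[band] -= dtau * (n * upwind)[band]
    return psi
```

At steady state the sweeps drive ∂ψ/∂n to zero in the band, i.e., ψ becomes constant along the normal and equal to its value at the interface.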
The solver was verified on standard test problems as well as simple coupled simulations where the solid solver was replaced by surrogate solid solvers (e.g., spring-mass systems). Some of these simple coupled cases had analytical solutions which were used to verify the coupled simulations. Special care was taken to ensure that results on different computing platforms and different numbers of processors were the same to within machine precision. Several of these results are omitted in the interest of brevity. We present mainly results from the new general EoS solver and fully coupled fluid-solid simulations in this section.
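A check of the kind described, agreement to within machine precision across platforms and processor counts, might look like the following (a hypothetical helper, not part of the VTF):

```python
import numpy as np

def machine_precision_match(field_a, field_b, ulps=4):
    """Compare two solution fields (e.g., from runs on different
    numbers of processors) to within a few units in the last place."""
    a = np.asarray(field_a, dtype=np.float64)
    b = np.asarray(field_b, dtype=np.float64)
    tol = ulps * np.finfo(np.float64).eps * np.maximum(np.abs(a), np.abs(b))
    return bool(np.all(np.abs(a - b) <= tol))
```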
Example: General EoS Solver
Shown in Fig. 2.3 are examples of an initially one-dimensional ZND detonation diffracting over a right-angle corner and over a complicated contour (which happens to be a silhouette of a pig), respectively. The results of the corner-turning problem were also verified against a first-order AMR research code (see chapter on high explosives). The second example demonstrates the versatility of the current level-set approach. For both cases …
Example: Riemann Problem in a Deformable Cylinder
A standard gas dynamics test is a Riemann problem wherein a virtual membrane separates gas at high pressure and density from gas at low pressure and density. Upon rupture, the solution of the Riemann problem comprises two nonlinear waves (either shocks

(a) (b)
Figure 2.4: Riemann problem in a deformable cylinder. Shown is the pressure on the Lagrangian boundary. The initial condition is shown on the top, while the bottom shows the pressure after several reflections of the waves from the bottom wall.
or rarefactions) and a linearly degenerate wave (the contact discontinuity). In Fig. 2.4, the pressure on the boundary between the Eulerian and Lagrangian domains is shown at the initial and final times in the simulation. Clearly, the boundary between the two domains has significantly deformed. Furthermore, it is evident that the solution is no longer one-dimensional.
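For reference, the pressure in the star region between the two nonlinear waves can be computed by a standard Newton iteration on the ideal-gas pressure function (our sketch of the textbook algorithm; the Center's solver is not restricted to perfect gases):

```python
import math

def star_pressure(rhoL, uL, pL, rhoR, uR, pR, gamma=1.4, tol=1e-10):
    """Newton iteration for the star-region pressure of the 1-D
    ideal-gas Riemann problem, using the classical pressure function
    f(p) = fL(p) + fR(p) + (uR - uL) with shock/rarefaction branches."""
    cL = math.sqrt(gamma * pL / rhoL)
    cR = math.sqrt(gamma * pR / rhoR)

    def f_side(p, rho, pk, c):
        if p > pk:  # shock branch
            A = 2.0 / ((gamma + 1.0) * rho)
            B = (gamma - 1.0) / (gamma + 1.0) * pk
            return (p - pk) * math.sqrt(A / (p + B))
        # rarefaction branch
        return 2.0 * c / (gamma - 1.0) * (
            (p / pk) ** ((gamma - 1.0) / (2.0 * gamma)) - 1.0)

    def df_side(p, rho, pk, c):
        if p > pk:
            A = 2.0 / ((gamma + 1.0) * rho)
            B = (gamma - 1.0) / (gamma + 1.0) * pk
            return math.sqrt(A / (p + B)) * (1.0 - (p - pk) / (2.0 * (p + B)))
        return (p / pk) ** (-(gamma + 1.0) / (2.0 * gamma)) / (rho * c)

    p = 0.5 * (pL + pR)  # initial guess
    for _ in range(50):
        f = f_side(p, rhoL, pL, cL) + f_side(p, rhoR, pR, cR) + (uR - uL)
        dp = f / (df_side(p, rhoL, pL, cL) + df_side(p, rhoR, pR, cR))
        p = max(p - dp, tol)
        if abs(dp) < tol * p:
            break
    return p
```

For the classic Sod initial data (ρ, u, p) = (1, 0, 1) on the left and (0.125, 0, 0.1) on the right, the iteration returns a star pressure near 0.303.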
Example: Detonation in a Tantalum Cylinder
This simulation perhaps captures all the features of the VTF. The fluid mechanics was performed on 1000 processors with a 48³ mesh on each processor and an effective resolution of 480³. The solid domain comprised roughly 60,000 tetrahedral elements and was run on its own set of 24 processors; thus the coupled simulation ran on 1024 processors of LLNL's Blue Pacific (IBM-SP2). The fluid domain was initialized with a one-dimensional ZND profile which propagated from left to right in a tantalum cylindrical shell with a tantalum target at the right end. The EoS package employed here was Mie-Grüneisen with an ad-hoc heat release term. We simply mention here that the solid mechanics solver allowed for plasticity and shock propagation in the solid; further details concerning the physical models in the solid solver may be found in the section on solid mechanics. In Fig. 2.5, we see several snapshots of two-dimensional slices of pressure in the fluid and the solid. At t = 0.93 µs we clearly see that the shock front of the detonation lags in the solid due to the slower sound speed in the solid. At t = 3.7 µs the shock front is within the tantalum target, and we see a diamond-shaped pattern of waves in the fluid. Also shown in Fig. 2.6 are snapshots of density in the fluid and levels of plasticity in the solid.
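The Mie-Grüneisen form mentioned above can be sketched as follows. This is a hedged illustration using the common Hugoniot-referenced form with nominal tantalum-like parameters from the shock-physics literature; it is not the report's actual fit and omits the ad-hoc heat release term:

```python
def mie_gruneisen_pressure(rho, e, rho0=16650.0, c0=3293.0, s=1.307, gamma0=1.60):
    """Mie-Gruneisen EoS referenced to the shock Hugoniot.
    rho, rho0 in kg/m^3; c0 (bulk sound speed) in m/s; s is the
    linear Us-up slope; e is specific internal energy in J/kg."""
    eta = 1.0 - rho0 / rho               # compression measure
    if eta > 0.0:                        # compression: Hugoniot reference
        p_h = rho0 * c0**2 * eta / (1.0 - s * eta) ** 2
        return p_h * (1.0 - 0.5 * gamma0 * eta) + gamma0 * rho0 * e
    # expansion: drop the Hugoniot term, keep the thermal term
    return rho0 * c0**2 * eta + gamma0 * rho0 * e
```

At the reference density with zero internal energy the pressure vanishes, and compression at fixed energy produces a positive pressure along the Hugoniot.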
Figure 2.5: Snapshots of detonation in a tantalum cylinder. From top to bottom the times are 0.18 µs, 0.93 µs, 1.8 µs, 3.7 µs. The variable shown is pressure on the center plane.
2.3.4 Future work
In the future, we will implement an adaptive mesh refinement version of the fluid solver and the fluid-solid coupling using the GrACE framework.
2.4 Solid Mechanics algorithms

In FY 00 we further enhanced the capabilities of the Center's solid solver adlib. The most important change is that adlib now works as a fully parallel solid mechanics solver. The full capabilities of adlib and their status as regards parallelization are shown in Fig. 2.7. As can be seen in the figure, the mechanics is now a fully parallel component. In addition, progress has also been made on producing a parallel version of the fragmentation and contact capability; this will be completed in FY 01. Still to be completed is the ability to perform dynamic adaptive meshing in parallel. At present we are constructing a mesh subdivision capability to address this issue and plan to have it fully integrated into the solver in FY 02.
At present the following capabilities have been successfully implemented in the solver:
• Solid modeling and scalable unstructured parallel meshing
• Fully Lagrangian finite element formulation
• Parallel explicit dynamics based on domain decomposition
• Serial adaptive re-meshing based on error estimation
• Shock physics capability implemented via artificial viscosity

• Fully parallel 3-D coupling with the fluid solver
• Serial non-smooth contact and fragmentation based on cohesive elements
• Full integration into the Pyre problem solving environment
We comment upon some of these capabilities below
2.4.1 Shock capturing capability
In order to successfully simulate shock propagation in solids, we have developed and implemented a robust shock capturing approach based on the work of Wilkins. The artificial viscosity provides the requisite dissipation to smooth shock discontinuities over a width comparable to a few mesh lengths but otherwise leaves the solution unaffected. The viscosity is given by