
Volume 20 - Materials Selection and Design, Part 12




Modeling of Manufacturing Processes

Anand J Paul, Concurrent Technologies Corporation

Introduction

MANUFACTURING PROCESSES typically involve the reshaping of materials from one form to another under a set of processing conditions. To minimize production cost and shorten the time to market for the product, iteration toward an appropriate set of operating conditions should not be done entirely on the shop floor. Predictive models should be used liberally to perform numerical experiments that give insight into the effect of the operating conditions on the properties of the final product.

Process models must be able to build the geometry of the product or process being modeled, accurately describe the physics of the process, and present the results in a way that is comprehensible to manufacturing engineers. Several types of models are used by industry. These include models used as tools in the course of scientific research; models that are very generalized in nature and can be applied to a wide variety of processes, but may not address the nuances of any one process; models that are very specific in nature and address only a narrow range of operating conditions; and models that rely on gross phenomena and are roughly 90% as accurate as, but take only 10% of the execution time of, more accurate models. Irrespective of the complexity, most models are used to gain one or more of the following advantages:

• Reduce iterations on the shop floor

• Optimize an existing process

• Understand an existing process better

• Develop a new process

• Improve quality by reducing the variability in the product and process


Classification of Models

The following points need to be considered in order to classify modeling problems:

• The physical phenomena affecting the process under consideration

• Mathematical equations describing the physical process

• Data needed to solve the equations

• Numerical algorithm to solve the equations given the boundary conditions and the constitutive behavior

• Availability of the software to provide answers

One of the most common classification methods is by type of process or physics. This means that one must identify the major phenomena occurring in the process, for example, convection, radiation, chemical reaction, diffusion, deformation, and so forth. Once the phenomena have been identified, the process needs to be defined in terms of mathematical equations, typically partial differential equations. These equations depend on time, space, field variables, and internal states. Ordinary differential equations can be used if the problem can be simplified so that the shape is not important and a lumped-parameter model suffices. Lumped-parameter models have been used for a variety of materials processes (Ref 1, 2).
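The contrast between the two equation types can be made concrete with a minimal sketch. When internal gradients in a part are negligible, cooling reduces to a single ordinary differential equation for the mean temperature; the coefficient and temperatures below are illustrative values, not data for any particular alloy.

```python
# Lumped-parameter cooling model: with negligible internal gradients,
# the part's shape drops out and one ODE governs its mean temperature:
#   dT/dt = -(hA / rho c V) * (T - T_inf)

import math

def cool(T0, T_inf, hA_over_rhocV, t):
    """Closed-form solution of the lumped cooling ODE at time t (s)."""
    return T_inf + (T0 - T_inf) * math.exp(-hA_over_rhocV * t)

def cool_euler(T0, T_inf, hA_over_rhocV, t, steps=10000):
    """Same model integrated numerically with explicit Euler steps."""
    T, dt = T0, t / steps
    for _ in range(steps):
        T += dt * (-hA_over_rhocV * (T - T_inf))
    return T

# A hot part cooling in air for 10 min; illustrative numbers only.
T_exact = cool(1200.0, 25.0, 0.001, 600.0)
T_num = cool_euler(1200.0, 25.0, 0.001, 600.0)
```

The analytical and numerical answers agree closely; the same lumped formulation extends to cases with no closed-form solution, where only the numerical route remains.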

Trang 4

The requirement for particular data, and the way in which those data are gathered, is an important step in the construction of a model. Researchers typically play down this step as an "industrial implementation detail": the rest of the model must be robust and accurate before data are needed, and once accurate data are available, the results of the modeling effort will be good as well. Industrial practitioners, on the other hand, place a greater emphasis on data gathering because they know the difficulty and time involved in gathering data on production-scale equipment.

Numerical algorithms for solving the differential equations fall into meshed-solution methods and lumped-parameter models. The major meshed-solution methods are finite differences, finite elements, and boundary-element methods; each is best suited to particular types of equations and boundary conditions. Within these methods, one can use a structured or an unstructured mesh. Structured meshes are created from rectilinear, bricklike elements; this type of mesh is easy to use, but fine geometric details may be missed. Unstructured meshes can be built from elements of any shape (tetrahedra, hexahedra or bricks, prisms, and so forth), which eliminates many of the disadvantages of a structured mesh.
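A structured one-dimensional mesh is enough to show the finite-difference idea: replace the spatial derivative in the conduction equation dT/dt = α d²T/dx² with a three-point stencil and march forward in time. The bar length, diffusivity, and temperatures below are assumed for illustration.

```python
# Explicit finite differences for 1-D transient conduction on a
# structured mesh of equally spaced nodes; ends held at fixed temperature.

def step(T, alpha, dx, dt):
    """One explicit time step over the interior nodes (ends fixed)."""
    r = alpha * dt / dx**2          # must be <= 0.5 for stability
    return [T[0]] + [T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
                     for i in range(1, len(T) - 1)] + [T[-1]]

# Bar initially at 800, both ends quenched to 20: the profile relaxes.
n, dx, alpha = 21, 0.01, 1e-5
dt = 0.4 * dx**2 / alpha            # satisfies the stability limit
T = [20.0] + [800.0] * (n - 2) + [20.0]
for _ in range(2000):
    T = step(T, alpha, dx, dt)
```

After many steps the profile has relaxed essentially to the end temperature, and by symmetry the two halves of the bar remain mirror images, which is a cheap sanity check on the stencil.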

Lumped-parameter models may help in understanding the effect of certain parameters on the process, as long as the problem formulation does not change. These models do not represent spatial variation directly, and their parameters may or may not be physically meaningful in themselves.

Choice of appropriate software is an important aspect of the usefulness of the model. Almost without exception, research process models and commercial software are written in a third-generation language (Fortran, Lisp, Pascal, C, C++), with user interfaces derived from various libraries. Because of the large number of calculations necessary to achieve the desired degree of detail, models may require parallel computing hardware for cost-effective solutions; software developed for such models must be able to run on, and make use of, the parallel-processing capabilities of the hardware.

Models for manufacturing processes can be classified in two primary ways, as shown in Fig 1. One classification scheme considers whether the model is on-line or off-line; the other considers whether the model is empirical, mechanistic, or deterministic.

Fig 1 Classification of models for manufacturing processes

Fully on-line models are part of the larger process control system in a plant. Sensors and feedback loops are characteristic of these models, which take their input directly from the system and implement changes in the plant on a continuous basis (Ref 3). Fully on-line models must be extremely fast and reliable; therefore, they need to be rather simple, without significant numerical calculation, and the physics of the process they address must be understood thoroughly. A good example of this type of model is the spray water system on a slab-casting machine, which is designed to deliver the same total amount of water to each portion of the strand surface. The flow rate changes to account for variations in the casting-speed history experienced by each portion as it passes through the spray zones.

Semi-on-line models are similar to fully on-line models. The distinguishing factor is that rather than the model taking appropriate action, a process engineer analyzes the situation and performs any corrective action needed. These models are typically slightly more complex than fully on-line models because they are essentially run by an operator; they need an excellent user interface and should require minimal user intervention.


Off-line models are typically used in the premanufacturing stage, that is, during research, design, or process-parameter determination. These models help to gain insight into the process itself and thereby help optimize it. There are many general-purpose models as well as models designed for very specific applications. These models are typically very complex and therefore need to be validated thoroughly before being used for any real predictions.

Literature models are those that exist primarily in the literature and are seldom used in conjunction with experiments. Typically these models are developed and run by the same individual. The advantage of literature models is that other developers can benefit from them instead of starting from scratch.

Empirical models are developed through statistical data gathering on a number of similar events. Such a model does not help one understand the process itself and may not be valid beyond the range of the available data.
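An empirical model is, at bottom, a curve fitted to observations. The sketch below fits a quadratic to fabricated press-load measurements; the numbers are invented for illustration, and the extrapolated value demonstrates exactly the validity-range caveat in the text.

```python
# Empirical model: least-squares quadratic fitted to (fabricated)
# press-load data. Valid for interpolation; extrapolation is suspect.

import numpy as np

reduction = np.array([5.0, 10.0, 15.0, 20.0, 25.0])   # %, made-up data
load = np.array([120.0, 180.0, 260.0, 370.0, 500.0])  # kN, made-up data

coeffs = np.polyfit(reduction, load, deg=2)   # fit a*r^2 + b*r + c
predict = np.poly1d(coeffs)

inside = predict(12.0)    # within the fitted range: plausible
outside = predict(60.0)   # extrapolation: no physical basis, use with care
```

The fit reproduces the data points closely, but nothing in the model constrains its behavior at 60% reduction; only a mechanistic model could say anything trustworthy there.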

Mechanistic models are based on the solution of the mathematical equations that represent the physics of the process being addressed. These models are readily extensible and can be used to study the effect of a variety of external factors on the process.

Neural network models are based on artificial neural networks and provide a range of powerful techniques for solving problems in pattern recognition, data analysis, and control. A neural network represents a complex, trainable, nonlinear transfer function between inputs and outputs. This allows effective solutions to be found for complex, nonlinear problems such as heat distribution.
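The "nonlinear transfer function" a network represents is mechanically simple: alternating matrix products and nonlinearities. The sketch below shows the forward pass of a one-hidden-layer network; the weights are arbitrary random placeholders, not a trained model, and the input names are invented.

```python
# Forward pass of a one-hidden-layer neural network: a trainable
# nonlinear map from process inputs to an output. Untrained weights.

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)   # 3 inputs -> 5 hidden
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)   # 5 hidden -> 1 output

def forward(x):
    h = np.tanh(x @ W1 + b1)        # nonlinear hidden layer
    return h @ W2 + b2              # linear output layer

x = np.array([0.5, -1.0, 2.0])     # e.g. speed, temperature, pressure
y = forward(x)
```

Training consists of adjusting W1, b1, W2, b2 so that this function reproduces measured input/output pairs; the forward pass itself never changes shape.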

References cited in this section

1. M.F. Ashby, Physical Modeling of Materials Problems, Mater. Sci. Technol., Vol 8 (No. 2), 1992, p 102-111

2. H.R. Shercliff and M.F. Ashby, Modeling Thermal Processing of Al Alloys, Mater. Sci. Technol., Vol 7 (No. 1), 1991, p 85-88

3. B.G. Thomas, Comments on the Industrial Application of Process Models, Materials Processing in the Computer Age II, V.R. Voller, S.P. Marsh, and N. El-Kaddah, Ed., The Minerals, Metals & Materials Society, 1995, p 3-19


Important Aspects of Modeling

There are several important issues that need to be addressed to understand what is important in modeling in general and what is important to the current problem in particular. Some of these are briefly discussed below.

Analytical versus Meshed Models. Developing an analytical or closed-form solution model may be advantageous in many instances. However, it may be necessary to construct a discrete meshed model for finite element or finite difference calculations if the modeled volume:

• Has a complex shape (commonly found in many engineering applications)

• Contains different phases and grains, which are typically modeled by research groups (Ref 4)

• Contains discontinuous behavior such as a phase change, which can be handled easily with meshes using a volume-of-fluid (VOF) technique (Ref 5)

• Has nonlinear process physics such as when the heat transfer coefficient is a nonlinear function of the temperature


In many instances, meshed models are supplemented by some nonmeshed symbolic or analytical modeling. This is done in order to decide on appropriate boundary conditions for the meshed part of the problem, because it is the boundary conditions that effectively model the physical problem and control the form of the final solution (Ref 6).

Analytic models are always useful for distinguishing between mechanisms that have to be modeled as a coupled set and mechanisms that can be modeled separately. These models no longer need closed-form solutions: even simple computers can track evolving solutions and iterate to find solutions to implicit formulations (Ref 1).
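Iterating on an implicit formulation can be sketched in a few lines. Consider a surface temperature set by an energy balance with both convection and radiation: the T⁴ term prevents any closed-form solution, but bisection finds the root directly. The flux, film coefficient, and emissivity are assumed illustrative values, and bisection is one of several iteration schemes that would serve.

```python
# Implicit analytic model: find surface temperature T (K) satisfying
#   q = h (T - T_inf) + eps * sigma * (T^4 - T_inf^4)
# No closed form exists; iterate (here, by bisection) instead.

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2.K^4

def residual(T, q=50000.0, h=25.0, eps=0.8, T_inf=300.0):
    """Imposed flux minus convective + radiative losses at temperature T."""
    return q - h * (T - T_inf) - eps * SIGMA * (T**4 - T_inf**4)

def solve(lo=300.0, hi=3000.0, tol=1e-6):
    """Bisection: the residual decreases monotonically in T."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

T_surface = solve()
```

About thirty halvings of the bracket pin the temperature to microkelvin precision; the same loop structure handles any monotone implicit balance.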

Boundary Conditions and Multiphysics Models. Application of appropriate boundary conditions is a major part of the activity of process modeling. Boundary conditions are statements of symmetry, continuity, and constancy of temperature, heat flux, strain, stress, and so forth. Boundary conditions must be set at a very early stage in analytical models. In meshed models, they are typically represented separately from the main equations and are decoupled to some extent from the model itself; therefore, sensitivity analysis can be done much more easily with meshed methods.

The type of boundary conditions used also determines what solving algorithm should be used for the partial differential equations. This determines the speed, accuracy, and robustness of the solution.

Material Properties. All process models require material properties as input. Acquiring these properties can be difficult and expensive (Ref 7). A sensitivity analysis of the model with respect to these data provides information on the importance of minor changes in them. In many instances, it may be possible to use models with doubtful material property information to predict trends, as opposed to determining actual values. A problem arises if material properties are extrapolated beyond the range of their applicability, where the behavior of the material is not known at all. Related information is provided in the articles "Computer-Aided Materials Selection" and "Sources of Materials Properties Data and Information" in this Volume.
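The property-sensitivity check described here is straightforward to automate: perturb one input by a few percent, re-run the model, and normalize the change in output. The "model" below is a simple lumped cooling law with assumed illustrative values; the same recipe applies unchanged to a full meshed model.

```python
# Sensitivity analysis by perturbation: how strongly does a cooling
# time depend on the heat-transfer parameter? Values are illustrative.

import math

def time_to_reach(T_target, T0=1200.0, T_inf=25.0, hA_over_rhocV=0.001):
    """Time for a lumped cooling model to fall from T0 to T_target."""
    return -math.log((T_target - T_inf) / (T0 - T_inf)) / hA_over_rhocV

base = time_to_reach(400.0)
perturbed = time_to_reach(400.0, hA_over_rhocV=0.001 * 1.05)  # +5% in h

# Normalized sensitivity: % change in output per % change in input.
sensitivity = ((perturbed - base) / base) / 0.05
```

Here the sensitivity is close to -1 (cooling time is inversely proportional to the coefficient), so a 5% error in the property produces a comparable error in the prediction; a sensitivity near zero would justify using rough property data.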

Modeling Process Cycles. Modeling is done in several nested cycles, which Figure 2 attempts to distinguish (Ref 8). The figure shows three loops, suggesting three levels of activity in any modeling effort. The outer loop is managed by someone close to the process who understands the business context of the problem and can concentrate on specifying the objective and providing the raw data. The innermost loop (shaded dark) requires mostly computational skills, while the middle loop (shaded light) consists of activities balancing the other two. It may well happen that all three, or some combination, of the activities can be done by the same person; however, that is very seldom the case. This highlights the need for modeling teams in which all aspects of the problem can be addressed rigorously. It also emphasizes the importance of training and of appropriate software tool development, so that the input and output of the tools can be easily understood by everyone involved in the process.


Fig 2 Modeling cycles. Source: Ref 8

References cited in this section

1. M.F. Ashby, Physical Modeling of Materials Problems, Mater. Sci. Technol., Vol 8 (No. 2), 1992, p 102-111

4. M. Rappaz and Ch.-A. Gandin, Probabilistic Modeling of the Microstructure Formation in Solidification Processes, Acta Metall. Mater., Vol 41 (No. 2), 1993, p 345-360

5. J. Wang, S. Xun, R.W. Smith, and P.N. Hansen, Using SOLVA-VOF and Heat Convection and Conduction Technique to Improve the Casting Design of Cast Iron, Modeling of Casting, Welding and Advanced Solidification Processes VI, T. Piwonka, Ed., TMS, 1993, p 397-412

6. L. Edwards and M. Endean, Ed., Manufacturing with Materials, Materials in Action, Butterworth Scientific


Modeling of Deformation Processes

Finite element analysis (FEA) of deformation processes can provide insight into the behavior of the product under various processing conditions and can help optimize those conditions to obtain the desired properties. It can also help in understanding the performance of the product before the part is put into actual use. Common problems solved by FEA include insufficient die filling, poor shape control, poor flow of material, cracks and voids that lead to fracture, and poor final part properties.

The physical phenomena typically occurring in a forging operation are shown in Fig 3. Figure 4 (Ref 9) shows a schematic representation of the interactions between the major process variables in metal forming. From Fig 4 it can be seen that a metal-forming analysis must satisfy the equilibrium conditions, the compatibility equations/strain-displacement relations, the constitutive equations, and, in some instances, the heat-balance equation. In addition, appropriate boundary conditions must be applied. These may comprise displacement/velocity imposed on part of the surface while stress is imposed on the remainder, heat transfer, or any other interface boundary condition.

Fig 3 Typical physical phenomena occurring during a forging operation


Fig 4 Interaction among major process variables during forming. Source: Ref 9

Relevant Equations. The equilibrium equations describing the various forces acting on the body are given as:

∂σx/∂x + ∂τyx/∂y + ∂τzx/∂z + Fx = 0
∂τxy/∂x + ∂σy/∂y + ∂τzy/∂z + Fy = 0    (Eq 1)
∂τxz/∂x + ∂τyz/∂y + ∂σz/∂z + Fz = 0

where σ is the normal stress component, τ is the shear stress component, and F is the body force per unit volume.

Similarly, the strain-displacement relationships are given as:

εx = ∂u/∂x, εy = ∂v/∂y, εz = ∂w/∂z
γxy = ∂u/∂y + ∂v/∂x, γyz = ∂v/∂z + ∂w/∂y, γzx = ∂w/∂x + ∂u/∂z

where ε is the normal strain component, γ is the engineering shear strain component, and u, v, and w are the displacement components.


Fig 5 Schematic stress/strain curves for various materials

Fig 6 Stress/strain curve for a typical metal


Finite Element Analyses. One of the most commonly used techniques for solving the equations shown above is finite element analysis (FEA). It is a numerical technique that approximates the mathematical equations, which in turn approximate reality. It is a discrete representation of a continuous system: it breaks the larger problem into a number of smaller ones and is therefore a piecewise representation of the problem.

Some of the common metal-forming problems solved by FEA are insufficient die fill, poor shape control, poor flow of material, prediction of cracks and voids that lead to fracture, and poor final part properties. Finite element analysis has many advantages over typical closed-form solutions. It provides greater insight into the behavior of the product and the process; gives a good understanding of the performance of the product before actual use; is useful in product and process optimization; is a powerful and mature design and analysis tool; and, most importantly, gives solutions for irregular shapes, variable material properties, and irregular boundary conditions. However, FEA is not to be viewed as a solution to all problems or as a substitute for common sense and experience.

Typical costs of FEA can run into thousands of dollars. A rough estimate is that if brainstorming costs $1, then refined hand calculations cost $2, "quick and dirty" process models cost $5, and detailed process models cost $30. However, the probability of success of detailed process models is much higher than that of brainstorming, and great savings in the costs of experimentation and rework may be achieved.

There are several steps involved in conducting these analyses. A brief description of each is provided below. More detailed information is provided in the article "Finite Element Analysis" in this Volume.

Defining the Problem. Before a full-fledged FEA is undertaken, several questions need to be answered:

• Is FEA appropriate?

• What is desired from the FEA?

• When are results needed?

• What are the product and process limitations?

If it is determined that FEA would provide adequate answers, information is gathered to start the modeling process. The required information includes the geometry of interest, initial and boundary conditions, material properties and material behavior models, and an approximate solution to ensure that the finite element results are not physically absurd.

The preprocessing stage defines the physical problem and converts it into a form that the computer can solve. The definition of the physical problem involves fully defining the geometry as well as the material of the part to be simulated.

Next comes defining the physics of the process. This involves using the appropriate set of mathematical equations and the corresponding initial and boundary conditions. Subsequently, the solution method needs to be determined; this involves choosing the appropriate algorithms to solve the numerical approximations of the mathematical equations. Finally, because FEA is a numerical approximation of reality, the problem needs to be discretized (i.e., the continuum broken into many smaller pieces, the sum of which represents the whole problem). In this stage, the shape of these smaller pieces, or elements, their size, and their complexity need to be determined.

Discretization is the process of subdividing the whole geometry into discrete parts, called elements. Many different types of elements exist, varying in shape, linearity, and order. The factors that affect the selection of elements are the type of problem, the geometry, the accuracy desired, availability within the algorithm, the nature of the physical problem, and user familiarity. The size and number of elements are primarily determined by the various gradients (temperature, stress, etc.) in the system; for example, where the gradients are steep, a larger number of smaller elements should be used. An example of discretization is shown in Fig 7. For clarity, the node and element numbers are not shown in the mesh.


Fig 7 The finite element analysis discretization process

The mesh can be made coarser (fewer elements) or finer (more elements) depending on the needs of the problem. Coarser meshes use minimal computer resources in terms of storage and run time; however, because the representation is approximate, the results can be crude. Finer meshes provide a more accurate representation with improved results.
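The coarse-versus-fine trade-off can be quantified on a problem with a known answer. The sketch below solves a steady one-dimensional conduction problem with a sinusoidal source, -T'' = π² sin(πx) with T(0) = T(1) = 0, whose exact solution is sin(πx), on successively finer finite-difference meshes; the problem choice is for illustration only.

```python
# Mesh refinement study: error of a finite-difference solution against
# a known exact solution shrinks as the mesh is refined.

import numpy as np

def solve_fd(n):
    """Solve -T'' = pi^2 sin(pi x), T(0)=T(1)=0, on n interior nodes."""
    h = 1.0 / (n + 1)
    main = np.full(n, 2.0 / h**2)
    off = np.full(n - 1, -1.0 / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    x = np.linspace(h, 1.0 - h, n)
    T = np.linalg.solve(A, np.pi**2 * np.sin(np.pi * x))
    return x, T

errors = []
for n in (5, 11, 21):                      # coarse -> fine meshes
    x, T = solve_fd(n)
    errors.append(np.max(np.abs(T - np.sin(np.pi * x))))
```

Each refinement cuts the maximum error by roughly a factor of four, the second-order behavior expected of the three-point stencil, at the price of a larger system of equations, which is the coarse/fine trade-off in miniature.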

Typical elements are either linear or quadratic and can be one-, two-, or three-dimensional. Figure 8 schematically shows some typically used elements.

Fig 8 Linear and quadratic elements used in typical finite element analyses

Post Processing. Once the equations have been solved by the computer program, huge amounts of data are generated, typically not in a user-friendly format. These data need to be organized and processed to make sense and to give only the information that the user desires. The most frequently analyzed data are metal flow, time-history plots, ductile fracture tendency, stress/strain plots, and load-stroke curves. A good visualization scheme is crucial to understanding the outcome of the analysis.

Requirements of a Good FEA Code. For an FEA code to be useful, it must be easy to use and provide results that are easy to understand. A well-written user's manual or other documentation and good software support are also important. From a technical standpoint, the code must also be accurate, fast, and numerically stable under a wide range of conditions. It must be sufficient to capture the physics of the problem and should be capable of being interfaced with other codes as appropriate.

Example 1: Use of FEA to Study Ductile Fracture during Forging

Brittle fracture is sometimes observed after forging and heat treating large nickel-copper alloy K-500 (K-Monel) shafts (Ref 10). Inspection of failed fracture surfaces revealed that the cracks were intergranular and occurred along a carbon film that had apparently developed during a slow cooling cycle (such as during casting of the ingot stock). Cracks were observed in both longitudinal and transverse directions and arrested just short of the outer bar diameter. Finite element analysis was used to determine the cause of and remedy for this cracking. In addition, new processing parameters were determined to eliminate the cracking.

A two-dimensional model of the bar was developed and discretized into a finite element mesh. NIKE2D, a public-domain, nonlinear FEA code from Lawrence Livermore National Laboratory (Ref 11), was used to calculate the stress fields under the forging conditions. The stress/strain histories resulting from forging were required for the ductile fracture analyses. In this particular case, the Oyane et al. ductile fracture criteria (Ref 12) were used; these criteria attempt to predict ductile fracture based on a porous plasticity model.

Figure 9 (Ref 10) shows the in-process hydrostatic stress history at the center of a bar during forging with two rigid, parallel flat dies. (Hydrostatic stress is the mean of any three mutually perpendicular normal stresses at a given point; it is invariant with direction.) Various strain-rate and temperature conditions are shown, all having similar trends. At very low reductions (1.25%), the center of the bar is still within the elastic range and, therefore, does not experience large stresses. At 2.5% reduction, the outward movement of the free sides of the bar produces a high tensile hydrostatic stress at the center of the bar. As reduction continues, the compressive stresses acting in the vertical direction begin to grow, which acts to reduce the hydrostatic stress at the center. When the bar has been reduced by 11%, the hydrostatic stress is compressive in all cases. Beyond this reduction, the hydrostatic stress at the center of the bar continues to grow in a compressive manner.

Fig 9 Hydrostatic stress at the bar center during forging. Source: Ref 10
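The parenthetical definition of hydrostatic stress translates directly into a few lines of code: it is one third of the trace of the stress tensor, and the rotation check below confirms the invariance the text asserts. The stress values are arbitrary, made up for illustration.

```python
# Hydrostatic stress = mean of the three normal stresses = trace/3,
# and it is invariant under rotation of the coordinate axes.

import numpy as np

sigma = np.array([[ 150.0,  40.0,    0.0],
                  [  40.0, -80.0,   25.0],
                  [   0.0,  25.0, -250.0]])  # MPa, made-up stress state

hydro = np.trace(sigma) / 3.0                # negative: net compressive

# Rotate the axes by an arbitrary angle about z: the mean normal
# stress is unchanged, as the text states.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
sigma_rot = R @ sigma @ R.T
hydro_rot = np.trace(sigma_rot) / 3.0
```

A negative hydrostatic stress, as computed here, corresponds to the compressive state that suppresses void growth in the forging example; a positive value marks the tensile state where ductile damage accumulates.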

Figure 10(b) (Ref 10) shows a map of ductile fracture accumulation after one forging pass for air-melted material at 930 °C and a strain rate of 10 s-1. Notice that the maximum damage is located at the center of the bar. After this pass, the bar would be rotated and further reductions would be taken, with damage accumulating during these subsequent passes. From the FEA results, ductile fracture was predicted to occur at the bar center after six reducing passes under the specified conditions. One possible way of eliminating this ductile fracture is to use a V-shaped die assembly, as shown in Fig 10(a). In this case, the fracture occurs not at the center but at the edges, where it has an opportunity to heal as the bar is rotated during subsequent passes.

Fig 10 Ductile fracture map of a nickel-copper alloy K-500 bar at 10% reduction, 930 °C, and a strain rate of 10.0 s-1, using the Oyane et al. ductile fracture criteria. (a) Three forging dies. (b) Two forging dies. Source: Ref 10

According to a predesigned forming schedule, pressure is released from one side of the sheet, with the pressure differential pushing the sheet into the die cavity. As the sheet freely forms into the die cavity, thinning occurs relatively uniformly. Once the sheet makes contact with the die, frictional effects begin to make the thinning less uniform.

The constitutive behavior of the superplastic material (aluminum alloy 7475) is expressed in the form of a simple power-law equation:

ε̇ = Aσ^n

where ε̇ is the strain rate, σ is the stress, and A and n (= 1/m) are material constants. The above equation assumes that grain size remains constant during forming. Critical regions, where grain coarsening and an unacceptable amount of cavitation take place, can also be modeled by expressing the material behavior in a more rigorous form:

(Eq 4)

where A′ is a material constant, D is the diffusion coefficient, G is the shear modulus, b is the Burgers vector, k is Boltzmann's constant, T is the absolute temperature, d is the grain size, p is the grain size exponent, σ is the flow stress, n is the strain-hardening exponent, Q is the activation energy, and R is the universal gas constant. Cavitation may be

expressed as a function of the accumulated plastic strain as:

C = Co exp(K3ε̄)   (Eq 5)


where Co is the initial void volume, K3 is a constant, and ε̄ is the accumulated plastic strain.
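A minimal numerical sketch of these two relations; the constants A, n, Co, and K3 below are illustrative placeholders, not fitted values for alloy 7475:

```python
import math

# Superplastic power law, eps_dot = A * sigma**n, and cavity growth with
# accumulated strain, C = Co * exp(K3 * eps). All constants are placeholders.

def strain_rate(sigma, A=1.0e-9, n=2.0):
    """Strain rate (1/s) at flow stress sigma; n = 1/m, so n = 2 means m = 0.5."""
    return A * sigma ** n

def void_volume(eps, Co=1.0e-4, K3=2.0):
    """Void volume fraction after accumulated plastic strain eps."""
    return Co * math.exp(K3 * eps)

# With n = 2 (m = 0.5), doubling the stress quadruples the strain rate --
# the high strain-rate sensitivity characteristic of superplastic flow.
ratio = strain_rate(20.0) / strain_rate(10.0)
```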

Several subroutines were developed and implemented in a commercially available FEA code (Ref 13). Several trial forming problems were solved to show the benefits of various enhancements to the FEA code. Simple geometries were taken first to allow comparison of simple analytical solutions with the computed results. Subsequently, complex geometries were discussed to show where the simple analytical solutions broke down and where the FEA results give significant insight into the forming process.

References cited in this section

9 S Kobayashi, S Oh, and T Altan, Metal Forming and the Finite Element Method, Oxford University Press, 1989, p 27

10 M.L Tims, J.D Ryan, W.L Otto, and M.E Natishan, Crack Susceptibility of Nickel-Copper Alloy K-500 Bars During Forging and Quenching, Proc First International Conference on Quenching and Control of Distortion, ASM International, 1992, p 243-250

11 J.O Hallquist, "NIKE2D A Vectorized Implicit, Finite Deformation Finite Element Code for Analyzing the Static and Dynamic Response of 2-D Solids with Interactive Rezoning and Graphics," User's Manual, UCID-19677, Rev 1, Lawrence Livermore National Laboratory, 1986

12 S.E Clift, P Hartley, C.E.N Sturgess, and G.W Rowe, Fracture Prediction in Plastic Deformation Processes, Int J Mech Sci., Vol 32 (No 1), p 1-17

13 D.K Ebersole and M.G Zelin, "Superplastic Forming of Aluminum Aircraft Assemblies Simulation Software Enhancements," NCEMT Report No TR 97-05, National Center for Excellence in Manufacturing Technologies, Johnstown, PA, 1997

Modeling of Manufacturing Processes

Anand J Paul, Concurrent Technologies Corporation

Modeling of Casting Operations

Of late, considerable developments have taken place in the field of solidification modeling of casting processes. In the current state of the art, several software packages are available to analyze the solidification behavior of complex-shaped castings. These packages make use of several different approaches for solving the various problems associated with casting processes.

An overall architecture of a comprehensive solidification modeling system is shown in Fig 11. This figure (Ref 14) depicts the various modules available in the current state-of-the-art solidification simulation of casting processes, the information available from each module, and the interconnections between the modules. It is evident from the figure that the initial casting design is linked to a module called the quick-analysis module. Here, one can make use of approximate analysis schemes, such as the modulus approach (Ref 15), which uses geometry-based considerations to provide valuable insights into solidification times and, therefore, the propensity for defect formation during solidification.


Fig 11 Overall architecture of the modeling of casting processes. Source: Ref 14

The next stage is to design the rigging system for the casting, which includes the design of the gates, risers, downsprue, and so forth. This is currently based on the "rules of thumb" of foundry experts and empirical charts. Once the rigging design is established, the stage is set for solidification simulation. Here, the continuum mechanics problems of heat, mass, and momentum transfer are solved for the casting process simulation. Thus, one obtains the cooling history of the casting.

Subsequently, one can obtain information about the microstructure in the casting by coupling with the module for microstructure evolution. Further, the simulation data can be postprocessed using special-purpose models for defect prediction that enable one to visualize the defects under a given set of processing conditions. Apart from porosity-type defects, prediction of other defects, such as macrosegregation, is possible. Because macrosegregation occurs primarily due to the movement of the solid phase by convection during solidification, solving the fluid-flow equations in the mushy zone provides a solution.

Modeling the development of stresses in the casting has been another area of great challenge. Of late, several researchers have addressed the development of stresses during and after solidification, which is often the cause of distortion in castings. This is especially the case for highly nonequilibrium processes, such as die casting. Special numerical algorithms and techniques are being developed to handle more complex casting processes. For example, in large structural thin-walled castings, the normal solution methods would require an extremely large number of elements or nodes in the mesh, which significantly increases the computation time.

Quick-Analysis Schemes. Traditionally, for sand-casting analysis, the use of geometric methods has been known as the section modulus approach. The fundamental basis of geometric modeling is the relationship between the solidification time (tf) and a geometric parameter, called the section modulus (given by the volume-to-surface area ratio, V/A), as expressed by Chvorinov's rule (Ref 16):

tf = C(V/A)²   (Eq 6)

where C is a constant for a given metal, mold material, and mold temperature.
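Chvorinov's rule is simple enough to sketch directly; the mold constant C below is illustrative (in practice it must be measured for a given metal-mold combination):

```python
def solidification_time(volume, surface_area, C=2.0):
    """Chvorinov's rule, tf = C*(V/A)**2 (Eq 6). C = 2.0 min/cm**2 is a
    placeholder; the time units follow from the choice of C."""
    return C * (volume / surface_area) ** 2

# A 10 x 10 x 2 cm plate freezes faster than a 6 cm cube of comparable
# volume because its modulus V/A is smaller -- the geometric insight the
# quick-analysis module exploits.
t_plate = solidification_time(10*10*2, 2*(10*10 + 10*2 + 2*10))
t_cube = solidification_time(6**3, 6 * 6**2)
```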

For simple shapes, the modulus in Eq 6 can be calculated from the ratio of the volume and the surface area involved in cooling. However, for complex shapes discretized in a three-dimensional grid, the continuous distribution of the modulus can be determined using the concept of distance from the mold, as discussed in Ref 15. The modulus at each point in the casting is determined by the relation:

in knowledge-based analysis of manufactured components (Ref 21)

However, the drawback of most currently available design tools is the lack of a fully integrated system for the design of gates and risers, followed by a comprehensive process simulation. More recently, an attempt has been made to close this gap by developing a system for automatic rigging design (Ref 22). The drawback of this system is that the final design (which includes the part and rigging) is in the form of a finite difference mesh model and not a solid model, which inhibits automatic pattern generation. Currently, efforts are underway to overcome this problem.

In these methods for rigging design, the first step is the generation of the solid model of the part. This solid model is used to generate a discretized description of the part, in the form of a finite difference or finite element mesh. The gating design is achieved by applying empirical heuristics to the discretized solid model, as well as by performing a geometric analysis to determine the natural flow paths for liquid metal. These empirical heuristics include rules for the design of runners, sprues, and gates.

As a general rule, the casting should be gated and fed in a manner that ensures progressive solidification. There should be an adequate supply of molten metal to feed every section as it solidifies. Solidification should start at the location furthest from the ingate/casting junction and proceed toward the risers, which should solidify last. The design of risers involves a geometric analysis of the casting using empirical relations, such as Chvorinov's rule, to obtain the solidification time profile, followed by determination of the size and location of the riser. The size of the riser is decided based on the feed metal requirements as well as the solidification time.
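The riser-sizing step can be sketched with the common modulus heuristic that the riser modulus should exceed the casting modulus by a safety factor (the 1.2 used here is a typical value assumed for illustration, not taken from the text's references):

```python
def casting_modulus(volume, cooling_area):
    """Section modulus V/A of the casting section to be fed."""
    return volume / cooling_area

def min_riser_diameter(casting_mod, factor=1.2):
    """Minimum diameter of a cylindrical side riser with height = diameter,
    cooling through its side and top only. For H = D:
    V = pi*D**3/4 and A = pi*D*H + pi*D**2/4 = 1.25*pi*D**2, so M = D/5,
    and the modulus rule M_riser >= factor * M_casting gives
    D >= 5 * factor * M_casting."""
    return 5.0 * factor * casting_mod

# Plate casting, 10 x 10 x 2 cm, all faces cooling:
M_c = casting_modulus(10*10*2, 2*(10*10 + 10*2 + 2*10))
D_min = min_riser_diameter(M_c)
```

The riser size would then also be checked against the feed-metal (shrinkage) requirement, as the text notes.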

The Comprehensive Problem: Fluid Flow and Heat Transfer. Knowledge about fluid flow during the filling of a casting is important, for it affects heat transfer both during and after filling. The information obtained could help avoid problems such as cold shuts, where the melt solidifies before filling a void, or where a molten front of liquid comes in contact with already solidified metal. A mold-filling simulation is, therefore, indispensable if a high-quality casting-solidification analysis is desired. There are several computational techniques to simulate fluid flow during mold filling, some of which follow.

Momentum Balance Technique: The Solution Algorithm-Volume of Fluid Approach. To obtain an accurate profile of the velocity distribution in the mold during the filling of castings, one has to solve the governing equations. The main governing equations that need to be solved to track the free surface and obtain the velocity distribution in the melt are the continuity equation, the Navier-Stokes equation, and the equation for free-surface tracking. The continuity equation is given by:

∇·v = 0   (Eq 8)

where v represents the velocity. The Navier-Stokes equation is:


ρ(∂v/∂t + (v·∇)v) = −∇P + ∇·τ + ρg   (Eq 9)

where P is the fluid pressure, τ is the stress tensor, g is the acceleration due to gravity, and ρ is the density. The equation for free-surface tracking is:

∂F/∂t + (v·∇)F = 0   (Eq 10)

where F is the fractional volume of fluid in a computational cell.

The thermal history inside the casting and mold is obtained by solving the energy equation:

ρCP(∂T/∂t + v·∇T) = −∇·q + q̇   (Eq 11)

where CP is the specific heat, q is the heat flux, and q̇ is the rate of heat generation.

The boundary conditions can be of three types. In the first type, the temperature at the boundary is specified, and in the second, the heat flux is specified. More popularly, a third type of boundary condition is used, expressing the heat loss at the interface through a heat transfer coefficient:

q = h(T − To)   (Eq 12)

where h is the effective heat transfer coefficient across the mold-metal interface, T is the casting surface temperature, and To is the mold surface temperature.
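A minimal explicit finite-difference sketch of this energy balance in one dimension, with the third-type (heat transfer coefficient) boundary condition of Eq 12 applied at the mold-metal interface; all property values are illustrative, and latent heat is ignored here:

```python
def cool_slab(T_init=700.0, T_mold=25.0, h=500.0, k=30.0, rho=7000.0,
              cp=600.0, thickness=0.05, nx=20, dt=0.01, steps=1000):
    """Explicit 1-D conduction in a half-slab (SI units): interface node 0
    loses heat as q = h*(T - T_mold) (Eq 12); the far node is the insulated
    centerline (symmetry plane)."""
    dx = thickness / nx
    alpha = k / (rho * cp)               # thermal diffusivity
    T = [T_init] * nx
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, nx - 1):       # interior nodes: pure conduction
            Tn[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2.0*T[i] + T[i-1])
        q = h * (T[0] - T_mold)          # interface heat loss (Eq 12)
        Tn[0] = T[0] + dt / (rho * cp * dx) * (k * (T[1] - T[0]) / dx - q)
        Tn[-1] = Tn[-2]                  # zero-flux symmetry boundary
        T = Tn
    return T

profile = cool_slab()   # temperatures after 10 s of cooling
```

The surface node cools fastest, giving the monotonic profile expected from a casting losing heat to its mold.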

Recently, an alternative method of solving the volume-of-fluid equation through an analogy between the numerical treatment of filling and solidification has been developed (Ref 23). In this method, an alternative volume-of-fluid equation, based on an enthalpy-type variable, has been proposed to determine the function F. Encouraging results have been obtained with this approach.

Modeling Microstructural Evolution. The solution of Eq 11 requires knowledge of the term q̇ (the rate of latent heat evolution during solidification). This term can be described in two ways: the specific heat method or the latent heat method.

Specific Heat Method. In this classical approach, the latent heat is released by assuming that the solid forms over a specified temperature range. The specific heat is modified as:

CP′ = CP − L(dfs/dT)   (Eq 13)

where L is the latent heat and dfs/dT is the rate of change of the solid fraction with temperature, obtained from the phase diagram. Alternatively, one could use the Scheil equation to express the evolution of the solid fraction as:


fs = 1 − [(Tf − T)/(Tf − Tl)]^(1/(k−1))   (Eq 14)

where Tf is the fusion temperature of the pure metal, Tl is the liquidus temperature, and k is the partition coefficient of the alloy. These parameters are obtained from the phase diagram for the alloy. Using the above equation, one can estimate the latent heat released due to the evolution of the primary phase.
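The Scheil relation and the specific-heat modification of Eq 13 can be sketched together; the values of Tf, Tl, k, cp, and L below are illustrative, not taken from a specific phase diagram:

```python
def scheil_fraction_solid(T, Tf=1538.0, Tl=1510.0, k=0.2):
    """Solid fraction from the Scheil equation (Eq 14); temperatures in C.
    Tf, Tl, and k are placeholder values for a hypothetical alloy."""
    if T >= Tl:
        return 0.0
    return 1.0 - ((Tf - T) / (Tf - Tl)) ** (1.0 / (k - 1.0))

def effective_cp(T, cp=700.0, L=2.7e5, dT=0.1):
    """Specific-heat method: cp' = cp - L*dfs/dT, with the derivative taken
    numerically. dfs/dT < 0 on cooling, so cp' > cp inside the mushy zone."""
    dfs_dT = (scheil_fraction_solid(T + dT)
              - scheil_fraction_solid(T - dT)) / (2.0 * dT)
    return cp - L * dfs_dT
```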

The limitation of the above approach is that one cannot obtain information regarding the microstructure, such as grain size. New approaches were developed in the last decade to overcome this limitation (Ref 24, 25). These approaches incorporate the metallurgy of solidification into the simulations. In one such approach, known as the latent heat method, it has been shown that, to obtain microstructural information, the evolution of the fraction of solid should take into account not only the temperature but also the kinetics of nucleation and growth.

Latent Heat Method. In the latent heat method, the term q̇ in Eq 11 is evaluated based on the solidification kinetics of nucleation and growth, as applicable to the transformations occurring in the system. The expression for q̇ is given by:

q̇ = L(∂fs/∂t)   (Eq 15)

where L is the latent heat of the solidifying phase and ∂fs/∂t is the rate of evolution of the fraction of solid. Latent heat generation can be determined through the use of mathematical expressions that describe the evolution of the solid phase. The temperature history in the casting can then be obtained by solving Eq 11. Depending on the nature of the alloy, the expressions can vary. The following section describes the solidification kinetics for equiaxed eutectic structures, which are commonly found in cast iron and aluminum-silicon systems.

Probabilistic Models. Useful as they are, the deterministic models suffer from several shortcomings. They neglect crystallographic effects, so they are unable to account for the grain selection near the mold surface that leads to the columnar region. Furthermore, they do not account for the random nature of equiaxed grains. One cannot visualize the actual evolution of the grains; one can only get an idea of their size.

To overcome these limitations, several researchers have taken an altogether different track to model microstructural evolution (Ref 4, 26). They make use of probabilistic models to simulate the evolution of grain structure. These simulations are more like numerical experiments. In one such work (Ref 26), a Monte Carlo procedure was used to simulate the evolution of grain structure. This type of method is based on the principle of minimization of energy, where the energy of a given structural configuration is evaluated considering the present state (solid or liquid) of the various sites. Transitions are allowed to take place according to randomly generated numbers. Using this technique, the researchers were able to compute two-dimensional microstructures that closely resembled those observed experimentally. The main drawback of such an approach was the lack of a physical basis.

In a more recent investigation (Ref 4), a new approach for modeling grain structure formation during solidification was proposed. Based on a two-dimensional cellular automata technique, the model includes the mechanisms of heterogeneous nucleation and grain growth. Nucleation occurring at the mold wall, as well as in the liquid metal, is treated by using two distributions of nucleation sites. The location and crystallographic orientation of the grains are chosen randomly among a large number of cells and a certain number of orientation classes. The model has been applied to small specimens of uniform temperature. The columnar-to-equiaxed transition, the selection and extension of columnar grains in the columnar zone, and the impingement of equiaxed grains are clearly shown by this technique.
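The flavor of such probabilistic simulations can be conveyed by a toy two-dimensional cellular automaton in which random nucleation sites capture neighboring liquid cells step by step. This is only an illustration of the idea; the actual model of Ref 4 couples capture to the local undercooling, growth kinetics, and crystallographic orientation classes:

```python
import random

def grow_grains(n=20, nuclei=5, steps=40, seed=1):
    """Toy cellular automaton: 0 = liquid; grain ids 1..nuclei grow outward."""
    rng = random.Random(seed)
    grid = [[0] * n for _ in range(n)]
    for gid in range(1, nuclei + 1):            # random nucleation sites
        grid[rng.randrange(n)][rng.randrange(n)] = gid
    for _ in range(steps):
        new = [row[:] for row in grid]
        for i in range(n):
            for j in range(n):
                if grid[i][j] == 0:             # liquid cell
                    nbrs = [grid[x][y]
                            for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                            if 0 <= x < n and 0 <= y < n and grid[x][y]]
                    if nbrs:                    # captured by a neighboring grain
                        new[i][j] = rng.choice(nbrs)
        grid = new
    return grid

structure = grow_grains()
grain_ids = {c for row in structure for c in row}
```

After enough steps, the liquid is fully consumed and the grid partitions into grains whose boundaries depend on the random nucleation sites, much like the "numerical experiments" described above.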

Prediction of Defects: The Porosity Problem. Analysis of the conditions leading to the occurrence of casting porosity has been the focus of a number of investigations over the past few decades. With the advances in computer modeling of the casting process in recent years, there has been considerable interest in the use of numerical heat transfer and solidification models to predict casting porosity.

As far as the solidification parameters are concerned, the variables that control porosity can be narrowed down to the thermal gradient, the rate of solidification, the cooling rate, and the solidification time. Based on these, various approaches have been suggested to predict casting porosity, the oldest being the empirical criteria. Thermal parameters have also been formulated recently (Ref 27, 28). Many of these criteria are based on Darcy's law, approximating the mushy region as a porous medium. The pressure drop in the mushy region is then expressed in terms of thermal criteria functions to predict the onset of porosity.

The current modeling practice is to calculate these criteria functions using the solidification model to predict porosity. Some of these functions are quite successful in predicting porosity in short-freezing-range alloys, though there are many difficulties in applying them to long-freezing-range alloys. Figure 12 (Ref 14) shows the Niyama distribution for a ductile iron plate casting. The figure clearly shows the defects ending up in the riser, demonstrating adequate feeding.

Fig 12 Distribution of Niyama values in a ductile iron plate casting, showing propensity of defects only in the riser. Source: Ref 14
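The Niyama criterion itself is a simple function of local thermal quantities, Ny = G/√(cooling rate): cells where Ny falls below an alloy-dependent threshold are flagged as porosity-prone. The threshold and field values below are illustrative only:

```python
import math

def niyama(gradient, cooling_rate):
    """Niyama value Ny = G / sqrt(dT/dt); units follow the inputs
    (commonly (C*min)**0.5 / cm)."""
    return gradient / math.sqrt(cooling_rate)

def flag_porosity(gradients, cooling_rates, threshold=1.0):
    """Flag cells whose Niyama value falls below the chosen threshold.
    The threshold of 1.0 is a placeholder, not an alloy-specific value."""
    return [niyama(g, r) < threshold
            for g, r in zip(gradients, cooling_rates)]

# A shallow-gradient hot spot is flagged; a steep-gradient region near the
# riser contact is not.
flags = flag_porosity([0.5, 5.0], [1.0, 1.0])
```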

A major limitation of the criteria functions discussed previously is that they ignore the effects of casting macrostructure and grain size on porosity. The resistance to liquid feeding in the mushy region depends on the available surface area of solid in the interdendritic region, which is dictated by the macrostructure and grain size. Recently, Suri et al. proposed a new number, called the feeding resistance number (Ref 31), which takes into account the effect of casting macrostructure on final porosity. The validity of this proposed criterion is still under investigation.

Modeling Special Casting Processes: The Investment-Casting Process. Many critical, value-added components in the automotive, aerospace, and other key industries are manufactured by special casting processes, such as the investment-casting process, the lost-foam process, the tilt-pour (Cosworth) process, and so forth. Simulation of such processes requires the application of suitable submodels to handle the phenomenological aspects specific to each process. In this section, some of the research efforts in modeling investment casting are reviewed.

A comprehensive solidification simulation of the investment-casting process involves a number of computationally intensive steps, particularly the calculation of view factors (to model the radiation loss) and the three-dimensional analysis of mold filling and solidification.

The external heat loss is either purely radiative, or radiative as well as convective. For the general case, the heat transfer coefficient can be given by:

h = hr + hc

where hr and hc are the radiative and convective heat transfer coefficients, respectively. The radiative heat transfer coefficient is given by (Ref 32):


where c is a constant dependent on the surface geometry.

View factor is defined as the fraction of the radiation that leaves surface i in all directions and is intercepted by surface j. When two surfaces, dA1 and dA2, undergo radiation exchange, the view factor can be mathematically expressed as (Ref 32):

dF(dA1→dA2) = (cos θ1 cos θ2/πr²) dA2

where θ1 and θ2 are the angles between the line connecting the two elements and their respective surface normals, and r is the distance between them.

More recently, another technique has been proposed that enables quick calculation of the view factor distribution at the mold surface (Ref 17). This is a modified ray-tracing technique, in which a scheme is devised to send rays in various directions and the number going into air without mold interception is estimated. The view factor is then computed by calculating the fraction of rays that go into air. This scheme has been successfully applied to three-dimensional finite difference geometries.
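The ray-based scheme of Ref 17 can be sketched with Monte Carlo sampling: from a point on the mold surface, emit rays over the hemisphere and count the fraction that escape to the ambient without striking the mold. The blocking geometry below (a wall occluding half the azimuth) is a toy assumption:

```python
import math
import random

def view_factor_to_ambient(blocked, n_rays=20000, seed=0):
    """Estimate the view factor to the surroundings from one surface point.
    blocked(theta, phi) -> True if a ray in that direction hits the mold.
    Rays are cosine-weighted, as appropriate for diffuse emission."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_rays):
        theta = math.asin(math.sqrt(rng.random()))   # polar angle from normal
        phi = 2.0 * math.pi * rng.random()           # azimuth
        if not blocked(theta, phi):
            escaped += 1
    return escaped / n_rays

# A wall blocking half the azimuth range intercepts about half the radiation.
vf = view_factor_to_ambient(lambda theta, phi: phi < math.pi)
```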

References cited in this section

4 M Rappaz and Ch.-A Gandin, Probabilistic Modeling of the Microstructure Formation in Solidification Processes, Acta Metall Mater., Vol 41 (No 2), 1993, p 345-360

14 G Upadhya and A.J Paul, Solidification Modeling: A Phenomenological Review, AFS Trans., Vol 94, 1994, p 69-80

15 G Upadhya, C.M Wang, and A.J Paul, Solidification Modeling: A Geometry Based Approach for Defect Prediction in Castings, Light Metals 1992, Proceedings of Light Metals Div at 121st TMS Annual Meeting (San Diego), E.R Cutshall, Ed., TMS, 1992, p 995-998

16 N Chvorinov, Theory of Solidification of Castings, Giesserei, Vol 27, 1940, p 17-224

17 G Upadhya, S Das, U Chandra, and A.J Paul, Modeling the Investment Casting Process: A Novel Approach for View Factor Calculations and Defect Predictions, Appl Math Model., Vol 19, 1995, p 354-362

18 S.C Luby, J.R Dixon, and M.K Simmons, Designing with Features: Creating and Using Features Database for Evaluation of Manufacturability of Castings, ASME Comput Rev., 1988, p 285-292

19 J.L Hill and J.T Berry, Geometric Feature Extraction for Knowledge-Based Design of Rigging Systems for Light Alloys, Modeling of Casting, Welding and Advanced Solidification Processes V, TMS, 1990, p 321-328

20 J.L Hill, J.T Berry, and S Guleyupoglu, Knowledge-Based Design of Rigging Systems for Light Alloy Castings, AFS Trans., Vol 99, 1992, p 91-96

21 R Gadh and F.B Prinz, Shape Feature Abstraction in Knowledge-Based Analysis of Manufactured Products, Proc of the Seventh IEEE Conf on AI Applications, Institute of Electrical and Electronics Engineers, 1991, p 198-204

22 G Upadhya, A.J Paul, and J.L Hill, Optimal Design of Gating and Risering in Castings: An Integrated Approach Using Empirical Heuristics and Geometric Analysis, Modeling of Casting, Welding and Advanced Solidification Processes VI, T.S Piwonka, Ed., TMS, 1993, p 135-142

23 C.R Swaminathan and V.R Voller, An "Enthalpy Type" Formulation for the Numerical Modeling of Mold Filling, Modeling of Casting, Welding and Advanced Solidification Processes VI, T.S Piwonka, Ed., TMS, 1993, p 365-372

24 D.M Stefanescu and C.S Kanetkar, Computer Modeling of the Solidification of Eutectic Alloys: The Case of Cast Iron, Computer Simulation of Microstructural Evolution, D.J Srolovitz, Ed., TMS-AIME, 1985, p 171-188

25 D.M Stefanescu, G Upadhya, and D Bandyopadhyay, Heat Transfer-Solidification Kinetics Modeling of Solidification of Castings, Metall Trans A, Vol 21A, 1990, p 997-1005

26 J.A Spittle and S.G.R Brown, Acta Metall., Vol 37 (No 7), 1989, p 1803-1810

27 E Niyama, T Uchida, M Morikawa, and S Saito, A Method of Shrinkage Prediction and Its Application to Steel Casting Practice, AFS Cast Metal Res J., Vol 7 (No 3), 1982, p 52-63

28 Y.W Lee, E Chang, and C.F Chieu, Metall Trans., Vol 21B, 1990, p 715-722

31 H Huang, V.K Suri, N El-Kaddah, and J.T Berry, Modeling of Casting, Welding and Advanced Solidification Processes VI, T.S Piwonka, Ed., TMS Publications, 1993, p 219-226

32 G.H Geiger and D.R Poirier, Transport Phenomena in Metallurgy, Addison Wesley, 1975, p 332

Modeling of Manufacturing Processes

Anand J Paul, Concurrent Technologies Corporation

Modeling of Fusion Welding Processes

In fusion welding, parts are joined by melting and subsequent solidification of adjacent areas of two parts. Welding may be performed with or without the addition of a filler metal.

Figure 13 is a schematic diagram of the fusion welding process. Three distinct regions in the weldment are observed: the fusion zone, which undergoes melting and solidification; the heat-affected zone, which experiences significant thermal exposure and may undergo solid-state transformation, but no melting; and the base-metal zone, which is unaffected by the welding process.


Fig 13 Schematic representation of the fusion welding process

The interaction of the material and the heat source leads to rapid heating, melting, and vigorous circulation of the molten metal, driven by buoyancy, surface tension, impingement or friction, and, when electric current is used, electromagnetic forces. The resulting heat transfer and fluid flow affect the size and shape of the weld pool, the cooling rate, and the kinetics and extent of various solid-state transformation reactions in the fusion zone and heat-affected zone. The weld geometry influences dendrite and grain-growth selection processes. Both the partitioning of nitrogen, oxygen, and hydrogen between the weld pool and its surroundings and the vaporization of alloying elements from the weld-pool surface greatly influence the composition and the resulting microstructure and properties of the weld metal. In many processes, such as arc welding and laser-beam welding, an electrically conducting, luminous gas plasma forms near the weld pool.

Energy Absorption. During welding, the workpiece absorbs only a portion of the total energy supplied by the heat source. The absorbed energy is responsible for the outcome of the welding process. The consequences of the absorbed energy include the formation of the liquid pool, the establishment of the time-dependent temperature field in the entire weldment, and the structure and properties of the weldment. Therefore, it is very important to understand the physical processes involved in the absorption of energy during welding. The physical phenomena that influence the energy absorption by the workpiece depend on the nature of the material, the type of heat source, and the parameters of the welding process.

For arc welding, the fraction of the arc energy transferred to the workpiece, η, commonly known as the arc efficiency, is given by (Ref 33):

(Eq 20)

where q is the heat absorbed by the workpiece, I and V are the welding current and voltage, respectively, qe is the heat transferred to the electrode from the heat source, qp is the energy radiated and convected to the arc column per unit time (of which a proportion n is transferred to the workpiece), and qw is the heat absorbed by the workpiece (of which a proportion m is radiated away). For a consumable electrode, the energy transferred to the electrode is eventually absorbed by the workpiece. Thus, the above equation is simplified to:


(Eq 21)
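Whatever the detailed bookkeeping of Eq 20 and 21, the working quantity in practice is the net heat input q = ηVI and, for a moving source, the heat input per unit weld length q divided by the travel speed. A short sketch, with a typical arc efficiency of about 0.7 assumed for illustration (not a value from the text):

```python
def absorbed_power(eta, volts, amps):
    """Net power absorbed by the workpiece: q = eta * V * I (watts)."""
    return eta * volts * amps

def heat_input_per_length(eta, volts, amps, travel_speed):
    """Heat input per unit weld length (J/mm when speed is in mm/s)."""
    return absorbed_power(eta, volts, amps) / travel_speed

q = absorbed_power(0.7, 12.0, 150.0)                  # 12 V, 150 A arc
q_len = heat_input_per_length(0.7, 12.0, 150.0, 3.0)  # 3 mm/s travel
```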

Fluid Flow in the Weld Pool. The properties of the weld metal are strongly affected by the fluid flow and heat transfer in the weld pool. The flow is driven by surface tension, buoyancy, and, when electric current is used, electromagnetic forces (Ref 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45). In some instances, aerodynamic drag forces of the plasma jet may also contribute to the convection in the weld pool (Ref 46). Buoyancy effects originate from the spatial variation of the liquid-metal density, mainly because of temperature variations and, to a lesser extent, local composition variations. Electromagnetic effects are a consequence of the interaction between the divergent current path in the weld pool and the magnetic field it generates. This effect is important in arc and electron-beam welding, especially when a large electric current passes through the weld pool. In arc welding, a high-velocity plasma stream impinges on the weld pool. The friction of the impinging jet on the weld-pool surface can cause significant fluid motion at high currents. Fluid flow and convective heat transfer are often very important in determining the size and shape of the weld pool, the weld macrostructures and microstructures, and the weldability of the material.

Marangoni Force. The spatial gradient of surface tension is a stress, known as the Marangoni stress. The spatial variation of the surface tension at the weld-pool surface can arise owing to variations of both temperature and composition. Frequently, the main driving force for convection is the spatial gradient of surface tension at the weld-pool surface. In most cases, the difference in surface tension is due to the temperature variation at the weld-pool surface. For such a situation, the Marangoni stress can be expressed as:

τ = (dγ/dT)(dT/dy)   (Eq 22)

where τ is the shear stress due to the temperature gradient, γ is the interfacial tension, T is the temperature, and y is the distance along the surface from the axis of the heat source. If a boundary layer develops, the shear stress can also be expressed as (Ref 47):

(Eq 23)

where ρ is the density, μ is the viscosity, and u is the local velocity.

Buoyancy and Electromagnetic Forces. When the surface-tension gradient is not the main driving force, the maximum velocities can be much smaller. For example, when the flow is buoyancy driven, the maximum velocity, um, can be approximated by the following relation (Ref 48):

um = (gβΔTd)^1/2

where g is the acceleration due to gravity, β is the coefficient of volume expansion, ΔT is the temperature difference, and d is the depth. For the values ΔT = 600 °C, g = 981 cm/s², β = 3.5 × 10-5/°C, and d = 0.5 cm, the value of um is 3.2 cm/s. The existence of electromagnetically driven flow was demonstrated by Woods and Milner (Ref 49), who observed flow of liquid metal when current was passed through the metal bath by means of a graphite electrode. In the case of electromagnetically driven flow in the weld pool, the velocity values reported in the literature are typically in the range of 2 to 20 cm/s (Ref 50). The magnitudes of the velocities of both buoyancy-driven and electromagnetically driven flows in the weld pool are commonly much smaller than those obtained for surface-tension-driven flows.
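The buoyancy estimate is easy to check numerically; the values below are exactly those quoted in the text:

```python
import math

def buoyancy_velocity(g=981.0, beta=3.5e-5, dT=600.0, d=0.5):
    """Order-of-magnitude buoyancy-driven velocity, um = sqrt(g*beta*dT*d),
    in cm/s with g in cm/s**2 and d in cm."""
    return math.sqrt(g * beta * dT * d)

u_m = buoyancy_velocity()   # the text's values reproduce ~3.2 cm/s
```

This sits at or below the 2 to 20 cm/s reported for electromagnetically driven flow, and well below typical surface-tension-driven velocities.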

Convection Effects on Weld-Pool Shape and Size. Variable depth of penetration during the welding of different batches of a commercial material with composition within a prescribed range has received considerable attention. Often, the penetration depth is strongly influenced by the concentration of surface-active elements, such as oxygen or sulfur in steels. These surface-active impurity elements can affect the temperature coefficient of surface tension, dγ/dT, the resulting direction of convective flow in the weld pool (Ref 51), and the shape of the weld pool. The interfacial tension in these systems could be described by a formalism based on the combination of the Gibbs and Langmuir adsorption isotherms (Ref 52):

γ = γm° − A(T − Tm) − RTΓs ln[1 + k1ai exp(−ΔH°/R′T)]

where γ is the interfacial tension as a function of composition and temperature, γm° is the interfacial tension of the pure metal at the melting point Tm, A is the temperature coefficient of surface tension for the pure metal, R and R′ are the gas constants in appropriate units, T is the absolute temperature, Γs is the surface excess of the solute at saturation solubility, k1 is the entropy factor, ai is the activity of the solute, and ΔH° is the enthalpy of segregation. The calculated values (Ref 53) of surface tension for Fe-O alloys are shown in Fig 14. It is seen that for certain concentrations of oxygen, dγ/dT can change from a positive value at "low" temperature to a negative value at "high" temperature. This implies that in a weld pool containing a fairly high oxygen content, dγ/dT can go through an inflection point on the surface of the pool. Under these conditions, the fluid flow in the weld pool is more complicated than a simple recirculation.
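A numerical sketch of this formalism shows the sign change of dγ/dT directly. The Fe-O constants below are illustrative order-of-magnitude values assumed here (not the exact set behind Fig 14), and the text's two gas constants R and R′ are taken equal:

```python
import math

def surface_tension(T, a_i, gamma_m=1.943, A=4.3e-4, Tm=1811.0,
                    R=8314.0, Gs=2.03e-8, k1=1.38e-2, dH=-1.463e8):
    """Gibbs-Langmuir surface tension (N/m): T in K, a_i = solute activity,
    R in J/(kmol*K), Gs in kmol/m**2, dH in J/kmol. All values illustrative."""
    K = k1 * math.exp(-dH / (R * T))      # segregation equilibrium constant
    return gamma_m - A * (T - Tm) - R * T * Gs * math.log(1.0 + K * a_i)

def dgamma_dT(T, a_i, dT=1.0):
    """Temperature coefficient of surface tension by central difference."""
    return (surface_tension(T + dT, a_i)
            - surface_tension(T - dT, a_i)) / (2.0 * dT)
```

With these constants, dγ/dT is positive for an oxygen-bearing melt just above the melting point and turns negative at higher temperature, while for the pure metal (ai = 0) it is always negative, the behavior described for Fig 14.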

Fig 14 Calculated values of surface tension for Fe-O alloys. Source: Ref 53

The calculated gas tungsten arc welding (GTAW) fusion-zone profiles (Ref 53) for pure iron and an Fe-0.03O alloy are shown in Fig 15. The results clearly show the significant effect of oxygen concentration on the weld-pool shape and aspect ratio. Near the heat source, where the temperature is very high, the flow is radially outward. However, a short distance away from the heat source, where the temperature drops below the threshold value for the change in the sign of dγ/dT, the flow reverses direction. The flow field is not a simple recirculation. Although the qualitative effects of the role of surface-active elements are known, the numerical calculations provide a basis for quantitative assessment of their role in the development of weld-pool geometry.


Fig 15 Calculated GTAW fusion-zone profiles for pure iron and an Fe-0.03O alloy. Source: Ref 53

References cited in this section

33 J.F Lancaster, Metallurgy of Welding, Allen and Unwin, London, 1980

34 S Kou and Y Le, Metall Trans A., Vol 14, 1983, p 2245

35 C Chan, J Mazumdar, and M.M Chen, Metall Trans A, Vol 15, 1984, p 2175

36 G.M Oreper and J Szekely, J Fluid Mech., Vol 147, 1984, p 53

37 J Dowden, M Davis, and P Kapadia, J Appl Phys., Vol 57 (No 9), 1985, p 4474

38 A Paul and T DebRoy, Advances in Welding Science and Technology, S.A David, Ed., American Society for Metals, 1986, p 29

39 M.C Tsai and S Kou, J Numer Methods Phys., Vol 9, 1989, p 1503

40 T Zacharia, S.A David, J.M Vitek, and T DebRoy, Weld J Res Suppl., Vol 68, 1989, p 510s

41 T Zacharia, S.A David, J.M Vitek, and T DebRoy, Metall Trans A, Vol 20, 1989, p 957

42 J Mazumdar and A Kar, Thermomechanical Aspects of Manufacturing and Materials Processing, R.K Shah, Ed., Hemisphere, 1992, p 283

43 K Mundra and T DebRoy, Metall Trans B, Vol 24, 1993, p 145


44 K Mundra and T DebRoy, Weld J Res Suppl., Vol 72, 1993, p 1s

45 R.T Choo and Z Szekely, Weld J Res Suppl., Vol 73, 1994, p 25s

46 A Matsunawa, International Trends in Welding Science and Technology, S.A David and J Vitek, Ed., ASM International, 1993, p 3

47 C.J Geankoplis, Transport Processes and Unit Operations, Allyn and Bacon, 1983, p 186

48 J Szekely, Advances in Welding Science and Technology, S.A David, Ed., ASM International, 1986, p 3

49 R.A Woods and D.R Milner, Weld J Res Suppl., Vol 50, 1971, p 163s

50 Y.H Wang and S Kou, Advances in Welding Science and Technology, S.A David, Ed., ASM International, 1986, p 65

51 A.J Paul and T DebRoy, Metall Trans B., Vol 19, 1988, p 851

52 M.J McNallan and T DebRoy, Metall Trans B., Vol 22, 1991, p 557

53 T DebRoy and S.A David, Physical Processes in Fusion Welding, Rev Mod Phys., Vol 67 (No 1), 1995, p 85-112

Modeling of Manufacturing Processes

Anand J Paul, Concurrent Technologies Corporation

Conclusions

Much work has been done in developing techniques to simulate various manufacturing processes. These efforts range from conducting basic materials research to advanced software engineering. Through an optimum combination of these, tools can be developed that reduce manufacturing costs and increase quality. In fact, many such commercial tools are available today and are being utilized in the manufacturing operations of many companies. However, for modeling to have a significant impact on the manufacturing industry, it should be geared toward the small- and medium-sized enterprises that make up the bulk of the industry. Issues such as personnel training, well-integrated software environments, and help in setting up problems are important. Today, software developers and vendors are working to address these issues in an effort to enable users to take full advantage of the benefits that modeling and simulation can provide to the manufacturing industry.

Modeling of Manufacturing Processes

Anand J Paul, Concurrent Technologies Corporation

4 M Rappaz and Ch.-A Gandin, Probabilistic Modeling of the Microstructure Formation in Solidification Processes, Acta Metall Mater., Vol 41 (No 2), 1993, p 345-360

5 J Wang, S Xun, R.W Smith, and P.N Hansen, Using SOLVA-VOF and Heat Convection and Conduction Technique to Improve the Casting Design of Cast Iron, Modeling of Casting, Welding and Advanced Solidification Processes VI, T Piwonka, Ed., TMS, 1993, p 397-412


6 L Edwards and M Endean, Ed., Manufacturing with Materials, Materials in Action, Butterworth Scientific,

10 M.L Tims, J.D Ryan, W.L Otto, and M.E Natishan, Crack Susceptibility of Nickel-Copper Alloy K-500 Bars During Forging and Quenching, Proc First International Conference on Quenching and Control of Distortion, ASM International, 1992, p 243-250

11 J.O Hallquist, "NIKE2D A Vectorized Implicit, Finite Deformation Finite Element Code for Analyzing the Static and Dynamic Response of 2-D Solids with Interactive Rezoning and Graphics," User's Manual, UCID-19677, Rev 1, Lawrence Livermore National Laboratory, 1986

12 S.E Clift, P Hartley, C.E.N Sturgess, and G.W Rowe, Fracture Prediction in Plastic Deformation Processes, Int J Mech Sci., Vol 32 (No 1), p 1-17

13 D.K Ebersole and M.G Zelin, "Superplastic Forming of Aluminum Aircraft Assemblies Simulation Software Enhancements," NCEMT Report No TR 97-05, National Center for Excellence in Manufacturing Technologies, Johnstown, PA, 1997

14 G Upadhya and A.J Paul, Solidification Modeling: A Phenomenological Review, AFS Trans., Vol 94, 1994, p 69-80

15 G Upadhya, C.M Wang, and A.J Paul, Solidification Modeling: A Geometry Based Approach for Defect Prediction in Castings, Light Metals 1992, Proceedings of Light Metals Div at 121st TMS Annual Meeting (San Diego), E.R Cutshall, Ed., TMS, 1992, p 995-998

16 N Chvorinov, Theory of Solidification of Castings, Giesserei, Vol 27, 1940, p 17-224

17 G Upadhya, S Das, U Chandra, and A.J Paul, Modeling the Investment Casting Process: A Novel Approach for View Factor Calculations and Defect Predictions, Appl Math Model., Vol 19, 1995, p 354-362

18 S.C Luby, J.R Dixon, and M.K Simmons, Designing with Features: Creating and Using a Features Database for Evaluation of Manufacturability of Castings, ASME Comput Rev., 1988, p 285-292

19 J.L Hill and J.T Berry, Geometric Feature Extraction for Knowledge-Based Design of Rigging Systems for Light Alloys, Modeling of Casting, Welding and Advanced Solidification Processes V, TMS, 1990, p 321-328

20 J.L Hill, J.T Berry, and S Guleyupoglu, Knowledge-Based Design of Rigging Systems for Light Alloy Castings, AFS Trans., Vol 99, 1992, p 91-96

21 R Gadh and F.B Prinz, Shape Feature Abstraction in Knowledge-Based Analysis of Manufactured Products, Proc of the Seventh IEEE Conf on AI Applications, Institute of Electrical and Electronics Engineers, 1991, p 198-204

22 G Upadhya, A.J Paul, and J.L Hill, Optimal Design of Gating and Risering in Castings: An Integrated Approach Using Empirical Heuristics and Geometric Analysis, Modeling of Casting, Welding and Advanced Solidification Processes VI, T.S Piwonka, Ed., TMS, 1993, p 135-142

23 C.R Swaminathan and V.R Voller, An "Enthalpy Type" Formulation for the Numerical Modeling of Mold Filling, Modeling of Casting, Welding and Advanced Solidification Processes VI, T.S Piwonka, Ed., TMS, 1993, p 365-372

24 D.M Stefanescu and C.S Kanetkar, Computer Modeling of the Solidification of Eutectic Alloys: The Case of Cast Iron, Computer Simulation of Microstructural Evolution, D.J Srolovitz, Ed., TMS-AIME, 1985, p 171-188

25 D.M Stefanescu, G Upadhya, and D Bandyopadhyay, Heat Transfer-Solidification Kinetics Modeling of Solidification of Castings, Metall Trans A, Vol 21A, 1990, p 997-1005


26 J.A Spittle and S.G.R Brown, Acta Metall., Vol 37 (No 7), 1989, p 1803-1810

27 E Niyama, T Uchida, M Morikawa, and S Saito, A Method of Shrinkage Prediction and Its Application to Steel Casting Practice, AFS Cast Metal Res J., Vol 7 (No 3), 1982, p 52-63

28 Y.W Lee, E Chang, and C.F Chieu, Metall Trans., Vol 21B, 1990, p 715-722

29 V.K Suri, G Huang, J.T Berry, and J.L Hill, Applicability of Thermal Parameter Based Porosity Criteria to Long Freezing Range Aluminum Alloys, AFS Trans., 1992, p 399-407

30 P.N Hansen and P.R Sahm, How to Model and Simulate the Feeding Process in Casting to Predict Shrinkage and Porosity Formation, Modeling of Casting and Welding Processes IV, A Giamei and G.J Abbaschian, Ed., TMS, 1988, p 33-42

31 H Huang, V.K Suri, N El-Kaddah, and J.T Berry, Modeling of Casting, Welding and Advanced Solidification Processes VI, T.S Piwonka, Ed., TMS Publications, 1993, p 219-226

32 G.H Geiger and D.R Poirier, Transport Phenomena in Metallurgy, Addison Wesley, 1975, p 332

33 J.F Lancaster, Metallurgy of Welding, Allen and Unwin, London, 1980

34 S Kou and Y Le, Metall Trans A., Vol 14, 1983, p 2245

35 C Chan, J Mazumdar, and M.M Chen, Metall Trans A, Vol 15, 1984, p 2175

36 G.M Oreper and J Szekely, J Fluid Mech., Vol 147, 1984, p 53

37 J Dowden, M Davis, and P Kapadia, J Appl Phys., Vol 57 (No 9), 1985, p 4474

38 A Paul and T DebRoy, Advances in Welding Science and Technology, S.A David, Ed., American Society for Metals, 1986, p 29

39 M.C Tsai and S Kou, J Numer Methods Phys., Vol 9, 1989, p 1503

40 T Zacharia, S.A David, J.M Vitek, and T DebRoy, Weld J Res Suppl., Vol 68, 1989, p 510s

41 T Zacharia, S.A David, J.M Vitek, and T DebRoy, Metall Trans A, Vol 20, 1989, p 957

42 J Mazumdar and A Kar, Thermomechanical Aspects of Manufacturing and Materials Processing, R.K Shah, Ed., Hemisphere, 1992, p 283

43 K Mundra and T DebRoy, Metall Trans B, Vol 24, 1993, p 145

44 K Mundra and T DebRoy, Weld J Res Suppl., Vol 72, 1993, p 1s

45 R.T Choo and Z Szekely, Weld J Res Suppl., Vol 73, 1994, p 25s

46 A Matsunawa, International Trends in Welding Science and Technology, S.A David and J Vitek, Ed., ASM International, 1993, p 3

47 C.J Geankoplis, Transport Processes and Unit Operations, Allyn and Bacon, 1983, p 186

48 J Szekely, Advances in Welding Science and Technology, S.A David, Ed., ASM International, 1986, p 3

49 R.A Woods and D.R Milner, Weld J Res Suppl., Vol 50, 1971, p 163s

50 Y.H Wang and S Kou, Advances in Welding Science and Technology, S.A David, Ed., ASM International, 1986, p 65

51 A.J Paul and T DebRoy, Metall Trans B., Vol 19, 1988, p 851

52 M.J McNallan and T DebRoy, Metall Trans B., Vol 22, 1991, p 557

53 T DebRoy and S.A David, Physical Processes in Fusion Welding, Rev Mod Phys., Vol 67 (No 1), 1995, p 85-112


Manufacturing Cost Estimating

David P Hoult and C Lawrence Meador, Massachusetts Institute of Technology

Introduction

COST ESTIMATION is an essential part of the design, development, and use of products. In the development and design of a manufactured product, phases include concept assessment, demonstrations of key features, and detailed design and production. The next phase is the operation and maintenance of the product and, finally, its disposal. Cost estimation arises in each of these phases, but the cost impacts are greatest in the development and design phases.

Anecdotes, experience, and some data (Ref 1, 2) support the common observation that by the time 20% of a product is specified, 70 to 80% of the costs are committed, even if those costs are unknown! Another common perception is that the cost of correcting a design error (cost overrun) rises very steeply as product development proceeds through its phases. What might cost one unit of effort to fix in the concept assessment phase might cost a thousand units in detailed engineering. These experiences of engineers and designers force the cost estimator to think carefully about the context, timing, and use of cost estimates.

Manufacturing Cost Estimating

David P Hoult and C Lawrence Meador, Massachusetts Institute of Technology

General Concepts

In this article, the focus is on products defined by dimensions and tolerances, made from solid materials, and fabricated by some manufacturing process. Two issues should be apparent: first, accurate cost estimation for a product in its early stages of design is difficult; second, there are a very large number of manufacturing processes, so the discussion must somehow be restricted to achieve sensible results.

In dealing with the first issue, a series of cost estimates corresponding to the phases in the product-development program should be considered. As more details of the product are specified, the cost estimates should become more accurate. Thinking this way, it is plausible that different tools for cost estimation may be employed in different phases of a program. In this review, examples are given of cost-estimation methods that may be used in the contexts of the different phases of a development program.

Domain Limitation. The second issue gives rise to the important principle of domain limitation (Ref 3, 4): the data that form the basis for a cost estimate must be specific to the manufacturing process, the materials used, and so on. Cost estimates apply only within specific domains. This leads directly to another difficulty: most products, even if they are only moderately complex, like a dishwasher with 200 unique parts, have at least three to five domains in which cost estimates apply (e.g., sheet metal assembly, injection-molded plastic parts, formed sheet metal parts). More complex products, such as an aircraft jet engine with 25,000 unique parts, might involve 200 different manufacturing processes, each of which defines one or more domains of cost estimation. Clearly, as one considers still more complex products, such as a modern military radar system with 10,000 to 20,000 unique parts, or a large commercial airliner with perhaps 5 million unique parts, the domains of cost estimation expand dramatically. So, although domain limitation is necessary for estimate accuracy, it is not a panacea.

Database Commonality. Estimating the costs of a complex product through various phases of development and production requires organization of large amounts of data. If the data for design, manufacturing, and cost are linked, there is database commonality. It has been found (Ref 3) that database commonality results in dramatic reductions in cost and schedule overruns in military programs. In the same study, domain limitation was found to be essential in achieving database commonality.

Having database commonality with domain limitation implies that the links between the design and specific manufacturing processes, with their associated costs, are understood and delineated. Focusing on specific manufacturing processes allows one to collect and organize data on where and how costs arise in specific processes. With this focus, the accuracy of cost estimates can be determined, provided that uniform methods of estimation are used and that, over time, the cost estimates are compared with the actual costs as they arise in production. In this manner, the accuracy of complex cost estimates may be established and improved.

In present engineering and design practice, many organizations do not have adequate database commonality, and the accuracy of cost estimates is not well known. Database commonality requires an enterprise-wide description of cost-dominant manufacturing processes, a way of tracking actual costs for each part, and a way of presenting this information in an appropriate format to designers and cost estimators. Most "empirical methods" of cost estimation, which are based on industrywide studies of statistical correlation of cost, may or may not apply to the experience of a specific firm (see the discussion in the sections that follow).

Costs are "rolled up" for a product when all elements of its cost are accounted for. The criteria for cost estimation using database commonality are simple: speed (how long does it take to roll up a cost estimate on a new design?), accuracy (what is the standard deviation of the estimate, based on comparison with actual costs?), and risk (what is the probability distribution of the cost estimate; what fraction of the time will the estimate be more than 30% too low, for example?). One excellent indicator of database commonality is the roll-up time criterion. World-class cost-estimation roll-up times are minutes to fractions of days. Organizations with such rapid roll-up times have significantly lower cost and schedule overruns on military projects (Ref 3).
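The accuracy and risk criteria can be made concrete with a small sketch that compares a history of rolled-up estimates against the actual costs once they are known. The numbers below are hypothetical.

```python
import statistics

estimates = [100, 250, 80, 400, 150]   # rolled-up estimates, $thousands
actuals   = [110, 240, 95, 600, 160]   # actual costs once known, $thousands

# relative error of each estimate against the actual cost
errors = [(a - e) / a for e, a in zip(estimates, actuals)]

accuracy = statistics.stdev(errors)                      # accuracy criterion
risk = sum(1 for r in errors if r > 0.30) / len(errors)  # fraction >30% too low

print(f"std dev of relative error: {accuracy:.2f}")
print(f"fraction >30% too low:     {risk:.2f}")
```

Tracking these two numbers over time, per domain, is what turns a cost database into a calibrated estimator.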

Cost allocation is another general issue. Cost allocation refers to the process by which the components of a design are assigned target costs. The need for cost allocation is clear: how else would an engineer working on a large project know how much the part being designed should cost? If the cost is unknown and the target cost is not met, there will be time delays, and hence costs incurred, due to unnecessary design iteration. It is generally recognized that having integrated product teams (IPTs) is good industrial practice. Integrated product teams should allocate costs at the earliest stages of a development program, and cost estimates should be performed concurrently with the design effort throughout the development process. Clearly, estimating costs at early stages in a development program, for example, when the concept of the product is being assessed, requires quite different tools than when most or all of the details of the design are specified. Various tools that can be used to estimate cost at different stages of the development process are described later in this section.

Elements of Cost. There are many elements of cost. The simplest to understand is the cost of material. For example, if a part is fabricated from 10 lb of aluminum and that grade of aluminum costs $2/lb, the material cost is $20. The estimate gets only a bit more complex if, as in the case of some aerospace components, some 90% of the material will be machined away; then the resale value of the scrap material is deducted from the material cost.
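The material-cost arithmetic above, including the scrap credit, can be sketched as follows; the scrap resale price is a hypothetical value, not one given in the text.

```python
def material_cost(weight_lb, price_per_lb, machined_away_frac=0.0,
                  scrap_price_per_lb=0.0):
    """Net material cost = purchase cost - resale value of machined-off scrap."""
    purchase = weight_lb * price_per_lb
    scrap_credit = weight_lb * machined_away_frac * scrap_price_per_lb
    return purchase - scrap_credit

# the text's example: 10 lb of aluminum at $2/lb
print(material_cost(10, 2.0))  # 20.0
# aerospace case: 90% machined away, scrap resold at a hypothetical $0.50/lb
print(material_cost(10, 2.0, machined_away_frac=0.9, scrap_price_per_lb=0.5))  # 15.5
```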

Tooling and fixtures are the next easiest items to understand. If tools are used for only one product, and the lifetime of the tool is known or can be estimated, then only the design and fabrication cost of the tool is needed. Estimates of the fabrication costs for tooling are of the same form as those for the fabricated parts. The design cost estimate raises a difficult and general problem: cost capture (Ref 4). For example, tooling design costs are often classified as overhead, even though the cost of tools relates to design features. In many accounting systems, manufacturing costs are assigned "standard values," and variances from the standard values are tabulated. This accounting methodology does not, in general, allow the cost engineer to determine the actual costs of various design features of a part. In the ledger entries of many accounting systems, there is no allocation of costs to specific activities, that is, no activity-based costing (ABC) (Ref 5). In such cases there are no data to support design cost estimates.


Direct labor for products or parts that have a high yield in manufacturing normally has straightforward cost estimates, based on statistical correlation to direct labor for past parts of a similar kind. However, for parts that require a large amount of rework, the situation is more complex, and the issues of cost capture and the lack of ABC arise again. Rework may be an indication of uncontrolled variation in the manufacturing process. The problem is that rework and its supervision may be classified, in whole or in part, as manufacturing overhead. For these reasons, the true cost of rework may not be well known, and so the data to support cost estimates for rework may be lacking.

The portions of overhead associated with the design and production of a product are particularly difficult to estimate, due to the lack of ABC and the problem of cost capture. For products of simple or moderate complexity built in large volumes, cost estimates of overheads are commonly done in the simplest possible way: the duration of the project and the level of effort are used to estimate the overhead. This practice does not lead to major errors, because the overhead is a small fraction of the unit cost of the product.

For highly engineered, complex products built in low volume, cost estimation is very difficult. In such cases the problem of cost capture is also very serious (Ref 4).

Machining costs are normally related to the machine time required and a capital asset model for the machine, including depreciation, training, and maintenance. With a capital asset model, the focus of the cost estimate is the time to manufacture. A similar discussion holds for assembly costs: with a suitable capital asset model, the focus of the cost estimate is the time to assemble the product (Ref 1).
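A minimal capital-asset model of this kind might look like the following sketch, in which machining cost reduces to a machine-hour rate multiplied by the machine time. All prices, lifetimes, and rates below are invented for illustration.

```python
def machine_hour_rate(purchase_price, life_years, hours_per_year,
                      annual_maintenance, annual_training):
    # straight-line depreciation, no salvage value (a simplifying assumption)
    depreciation = purchase_price / life_years
    annual_cost = depreciation + annual_maintenance + annual_training
    return annual_cost / hours_per_year

def machining_cost(rate_per_hour, machine_time_h):
    # with the capital asset model fixed, cost reduces to rate x time
    return rate_per_hour * machine_time_h

rate = machine_hour_rate(500_000, 10, 2000, 30_000, 5_000)
print(rate)                       # 42.5 ($/h)
print(machining_cost(rate, 1.5))  # 63.75 ($ for a 1.5 h machining cycle)
```

The same structure serves for assembly: swap the machine-hour rate for a station-hour rate and the machining time for the assembly time.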

Methods of Cost Estimation. Three methods of cost estimation are discussed in the following sections of this article. The first is parametric cost estimation: starting from the simplest description of the product, an estimate of its overall cost is developed. One might think that such estimates would be hopelessly inaccurate because so little is specified about the product, but this is not the case. The key to this method is a careful limitation of the domain of the estimate (see the previous section). The example discussed later deals with the estimate of the weight of an aircraft; the cost of the aircraft would then be calculated using dollars per pound typical of the aircraft type. Parametric cost estimation is the generally accepted method of cost estimation in the concept assessment phase of a development program. The accuracy is surprisingly good, about ±30% (provided that recent product-design evolution has not been extensive).

The second method of cost estimation is empirically based: one identifies specific design features and then uses statistical correlation with the costs of past designs to estimate the cost of the new design. This empirical method is by far the most common in use. For the empirical method to work well, the features of the product for which the estimate is made should be unambiguously related to features of prior designs, and the costs of prior designs unambiguously related to design features. Common practice is to account for only the major features of a design and to ignore details. Empirical methods are very useful in generating a rough ranking of the costs of different designs and are commonly used for that purpose (Ref 1, 6, 7). However, there are deficiencies inherent in the empirical methods commonly used.
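A one-feature version of this empirical correlation can be sketched as follows. The part data are invented; a real estimator would fit many features at once, always within a single domain.

```python
# past stamped parts from ONE domain: a feature (perimeter) vs. recorded cost
perimeters = [60.0, 80.0, 100.0, 120.0, 150.0]   # cm (invented data)
costs      = [22.0, 31.0, 40.0, 50.0, 62.0]      # $ (invented data)

n = len(perimeters)
xbar = sum(perimeters) / n
ybar = sum(costs) / n

# ordinary least-squares fit of cost against the feature
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(perimeters, costs))
         / sum((x - xbar) ** 2 for x in perimeters))
intercept = ybar - slope * xbar

def estimate_cost(perimeter_cm):
    # valid only inside the domain the data came from (domain limitation)
    return slope * perimeter_cm + intercept

print(round(estimate_cost(90.0), 2))  # estimated cost of a new design, $
```

The deficiencies discussed next apply directly to such a fit: the correlation silently assumes the new part's perimeter will be produced by the same processes as the old parts'.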

The mapping of design features to manufacturing processes to costs is not one-to-one; rather, the same design feature may be made in many different ways. This difficulty, the feature-mapping problem, discussed in Ref 4, limits the accuracy of empirical methods and makes the assessment of risk very difficult. The problem is implicit in all empirical methods: the data upon which the cost correlation is based may assume manufacturing methods, used to generate the features of past designs, that do not apply to the new design. It is extraordinarily difficult to determine the implicit assumptions made about manufacturing processes in a prior empirical correlation. A commonly stated accuracy goal of empirical cost estimates is 15 to 25%, but very little data have been published on the actual accuracy of such estimates when applied to new designs.

The final method discussed in this article is based on the recent development called complexity theory. A mathematically rigorous definition of complexity in design has been formulated (Ref 8). In brief, complexity theory offers some improvement over traditional empirical methods: there is a rational way to assess the risk in a design, and there are ways of making the feature mapping explicit rather than implicit. Perhaps the most significant improvement is the capability to capture the cost impact of essentially all the design detail in a cost estimate. This allows designers and cost estimators to explore, in a new way, methods to achieve cost savings in complex parts and assemblies.
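As a rough illustration of "capturing all the design detail," the sketch below computes one plausible complexity metric: the total information content Σ log2(dimension/tolerance) over a part's dimensioned features. This is in the spirit of complexity-based estimating but is not the specific formulation of Ref 8.

```python
import math

def information_content(features):
    """features: iterable of (dimension, tolerance) pairs in the same units.
    Each tightly toleranced feature contributes more 'bits' of complexity."""
    return sum(math.log2(d / t) for d, t in features if t > 0)

# hypothetical part: every dimensioned feature contributes to the metric,
# so no design detail is ignored in the estimate
part = [(100.0, 0.1), (25.0, 0.05), (8.0, 0.01)]
bits = information_content(part)
print(round(bits, 1))  # total design information, in bits
```

A cost model would then correlate manufacturing time or cost against such a metric, rather than against a hand-picked subset of major features.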

References cited in this section

1 G Boothroyd, P Dewhurst, and W Knight, Product Design for Manufacture and Assembly, Marcel Dekker,


5 H.T Johnson and R.S Kaplan, Relevance Lost: The Rise and Fall of Management Accounting, Harvard Business School Press, 1991

6 G Boothroyd, Assembly Automation and Product Design, Marcel Dekker, 1992

7 P.F Ostwald, "American Machinist Cost Estimator," Penton Educational Division, Penton Publishing, 1988

8 D.P Hoult and C.L Meador, "Predicting Product Manufacturing Costs from Design Attributes: A Complexity Theory Approach," No 960003, Society of Automotive Engineers, 1996

Manufacturing Cost Estimating

David P Hoult and C Lawrence Meador, Massachusetts Institute of Technology

Parametric Methods

An example for illustrating parametric cost estimation is that of aircraft. In Ref 9, Roskam, a widely recognized researcher in this field, describes a method to determine the size (weight) of an aircraft. Such a calculation is typical of parametric methods. To determine cost from weight, one would typically correlate the (inflation-adjusted) costs of past aircraft of similar complexity with their weight. Thus, weight is a surrogate for cost at a given level of complexity.

Most parametric methods are based on such surrogates. As another simple example, consider that large coal-fired power plants, based on a steam cycle, cost about $1500/kW to build. So, if the year the plant is to be built (for inflation adjustment) and its kilowatt output are known, a parametric cost estimate can be readily obtained.
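The power-plant example reduces to one line of arithmetic once the surrogate is fixed; the 3%/yr inflation rate below is an assumed figure, not one from the text.

```python
def parametric_plant_cost(kw_output, dollars_per_kw=1500.0,
                          inflation_rate=0.03, years_ahead=0):
    # surrogate ($/kW) times size, escalated to the build year
    return kw_output * dollars_per_kw * (1.0 + inflation_rate) ** years_ahead

# 600 MW coal-fired steam plant, built today:
print(f"${parametric_plant_cost(600_000):,.0f}")  # $900,000,000
# the same plant built 5 years from now, at the assumed 3%/yr inflation:
print(f"${parametric_plant_cost(600_000, years_ahead=5):,.0f}")
```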

Parametric cost estimates have the advantage that little needs to be known about the product to produce the estimate. Thus, parametric methods are often the only ones available in the initial (concept assessment) stage of product development.

The first step in a parametric cost estimation is to limit the domain of application. Roskam correlates statistical data for a dozen types of aircraft and fifteen subtypes. The example he uses to explain the method is that of a twin-engine, propeller-driven airplane. The mission profile of this machine is given in Fig 1 (Ref 9).


Fig 1 Mission profile

Inspection of the mission specifications and Fig 1 shows that only a modest amount of information about the airplane is given. In particular, nothing is specified about the detailed design of the machine! The task is to estimate the total weight, WTO, or the empty weight, WE, of the airplane. Roskam argues that the total weight is equal to the sum of the empty weight; the fuel weight, WF; the payload and crew weight, WPL + Wcrew; and the trapped fuel and oil, which is modeled as a fraction, Mtfo, of the total weight. Mtfo is a small (constant) number, typically 0.001 to 0.005. Thus the fundamental equation for aircraft weight is:

WTO = WE + WF + WPL + Wcrew + MtfoWTO (Eq 1)

The basic idea of Roskam is that there is an empirical relationship between aircraft empty and total weights, which he finds to be:

log10 WTO = A + B log10 WE (Eq 2)

The coefficients, A and B, depend on which of the dozen types and fifteen subtypes of aircraft fit the description in Table 1 and Fig 1. It is at this point that the principle of domain limitation first enters. For the example used by Roskam, the correlation used to determine A = 0.0966 and B = 1.0298 for the twin-engine, propeller-driven aircraft spans a range of empty weights from 1000 to 7000 lb.

Table 1 Mission specification for a twin-engine, propeller-driven airplane

1 Payload Six passengers at 175 lb each (including the pilot) and 200 lb total baggage

2 Range 1000 statute miles with maximum payload

3 Reserves 25% of mission fuel

4 Cruise speed 250 knots at 75% power at 10,000 ft and at takeoff weight

5 Climb 10 min to 10,000 ft at takeoff weight

6 Takeoff and landing 1500 ft ground run at sea level, standard day. Landing at 0.95 of takeoff weight

7 Powerplants Piston/propeller

8 Certification base FAR23

The method proceeds to determine the weight of fuel required in the following way. The mission fuel, WF, can be broken down into the weight of the fuel used and the reserve fuel:

WF = WFused + WFres

Roskam models the reserve fuel as a fraction of the fuel used (see Table 1). The fuel used is modeled as a fraction of the total weight and depends on the phase of the mission, as described in Fig 1. For mission phases that are not fuel intensive, a fixed ratio of the weight at the end of the phase to that at the beginning of the phase is given. Again, these ratios are specific to the type of aircraft. For fuel-intensive phases, in this example the cruise phase, there is a relationship among the lift/drag ratio of the aircraft, the engine fuel efficiency, and the propeller efficiency. Again, these three parameters are specific to the type of aircraft.

When the fuel fraction of the total weight is determined, either by a cruise calculation or by the ratio of weight at the end of a mission phase to that at the beginning, the empty weight can be written in terms of the total weight. Then Eq 2 is used to find the total weight of the aircraft.

For the problem posed, Roskam obtains an estimated total weight of 7900 lb. The accuracy, estimated from the scatter in the correlation used to determine the coefficients A and B, is about ±30%. For details of Roskam's solution method, refer to Ref 9.
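Roskam's sizing loop can be sketched as a fixed-point iteration on Eq 1, with Eq 2 inverted to supply the empty weight. The coefficients A and B are the ones quoted above; the overall mission fuel fraction (WF = 0.215 WTO) and Mtfo = 0.005 are assumptions made here to close the loop, not values taken from the text.

```python
import math

A, B = 0.0966, 1.0298      # log10(WTO) = A + B*log10(WE), twin-engine prop class
WPL_CREW = 6 * 175 + 200   # payload plus crew from Table 1, lb
FUEL_FRAC = 0.215          # assumed overall WF/WTO (cruise + reserves)
MTFO = 0.005               # trapped fuel and oil fraction (upper end of 0.001-0.005)

def empty_weight(wto):
    # invert the empirical regression (Eq 2) to get WE from WTO
    return 10.0 ** ((math.log10(wto) - A) / B)

wto = 6000.0               # initial guess, lb
for _ in range(200):       # iterate Eq 1 until WTO stops changing
    wto_next = empty_weight(wto) + (FUEL_FRAC + MTFO) * wto + WPL_CREW
    if abs(wto_next - wto) < 0.1:
        break
    wto = wto_next

print(round(wto))          # converges near Roskam's 7900 lb answer
```

In the actual method, the fuel fraction is not assumed but built up phase by phase from the mission profile of Fig 1; the iteration structure is the same.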


Some limitations of the parametric estimating method are of general interest. For example, if the proposed aircraft does not fit any of the domains of the estimating model, the approach is of little use. Such an example might be the V-22, a tiltrotor aircraft (Ref 10), which flies like a fixed-wing machine but tilts its rotors, allowing the craft to hover like a helicopter during takeoff and landing. Such a machine might be considered outside the domain of Roskam's estimating model. The point is not that the model is inadequate (the V-22 is more recent than Roskam's 1986 article), but that the limited product knowledge in the early stages of development makes it difficult to determine whether a cost estimate for the V-22 fits in a well-established domain.

Conversely, even complex machines, such as aircraft, are amenable to parametric cost estimates with fairly good accuracy, provided they are within the domain of the cost model In the same article, Roskam presents data for transport jets, such as those used by airlines It should be emphasized that the weight (and hence cost) of such machines, with more than one million unique parts, can be roughly estimated by parametric methods

Of course, cost is not the same as weight or, for that matter, any other engineering parameter. The details of the manufacturing process, inventory control, design change management, and so forth all play a role in the relationship between weight and cost. The more complex the machine, the more difficult it is to determine whether the domain of the parametric cost-estimating model is the same as that of the product being estimated.

References cited in this section

9 J Roskam, Rapid Sizing Method for Airplanes, J Aircraft, Vol 23 (No.7), July 1986, p 554-560

10 The Bell-Boeing V-22 Osprey entered Low Rate Initial Production with the MV-22 contract signed June 7, 1996, Tiltrotor Times, Vol 1 (No 5), Aug 1996

Manufacturing Cost Estimating

David P Hoult and C Lawrence Meador, Massachusetts Institute of Technology

Empirical Methods of Cost Estimation

Almost all the cost-estimating methods published in the literature are based on correlation of cost with some feature or property of the part to be manufactured. Two examples are presented. The first is from the book by Boothroyd, Dewhurst, and Knight (Ref 1), hereafter referred to as BDK. Chapter 9 of this book is devoted to "Design for Sheet Metalworking," and its first part covers estimates of the costs of the dies used for sheet metal fabrication. This example was chosen because the work of these authors is well recognized (Boothroyd and Dewhurst Inc sells widely used software for design for manufacture and assembly). In this chapter of the book, the concept of "complexity" of stamped sheet metal parts arises; the complexity of mechanical parts is discussed in the section "Complexity Theory" in this article.

Example 1: Cost Estimates for Sheet Metal Parts

Sheet metal comes in some 15 standard gages, ranging in thickness from 0.38 to 5.08 mm. It is commonly available in steel, aluminum, copper, and titanium. Typical prices for these materials are $0.80-0.90/lb for low-carbon steel, $6.00-7.00/lb for stainless steel, $3.00/lb for aluminum, $10.00/lb for copper, and $20.00/lb for titanium. It is typically shipped in large coils or large sheets.

Automobiles and appliances use large amounts of steel sheet metal. Aluminum sheet metal is used in commercial aircraft manufacture, but in lesser amounts due to the smaller number of units produced.

Sheet metal is fabricated by shearing and forming operations, carried out by dies mounted in presses. Presses have beds ranging in size from 50 by 30 cm to 210 by 140 cm (20 by 12 in. to 82 by 55 in.). The press force ranges from 200 to 4500 kN (approximately 45,000 to 1,000,000 lbf). The press speed ranges from 100 strokes/min in smaller sizes down to 15 strokes/min in larger sizes.


Dies typically have four components: a basic die set; a punch, held by the die set, which shears or forms the metal; a die plate through which or on which the punch acts; and a stripper plate, which removes the scrap at the end of the fabrication process

BDK estimate the basic die set cost (Cds, in U.S. dollars) as scaling essentially with the usable area (Au, in cm2):

The assessment of how part complexity affects cost arises repeatedly in cost estimating. The subject is discussed at length in the next section, "Complexity Theory." From the data of BDK, the basic time to manufacture the die set (M, in hours) can be estimated by the following steps: define the basic manufacturing points (Mpo) as

Note that the manufacturing time increases a bit less than linearly with part complexity. This is consistent with the section "Complexity Theory." BDK go on to add two correction factors to Mpo. The first is a correction factor, fLW, due to plate size and part complexity. From BDK data it is found:

The second correction factor accounts for the die plate thickness. BDK cite Nordquist (Ref 11), who gives a recommended die thickness, hd, as:

where U is the ultimate tensile stress of the sheet metal, Ums is the ultimate tensile stress of mild steel (a reference value), V is the required production volume, and h is the thickness (in mm) of the metal to be stamped. BDK recommend the second correction factor to be:


The die cost risk (i.e., the uncertainty of the resulting estimate of die cost) is unknown, because it is not known how the model equations would change with different manufacturing processes or different die design methods.

It is worth noting that only some features of the design of the part enter the cost estimate: the length and width of the punch area, the perimeter of the part to be made, the material, and the production volume. Thus, the product and die designers do not need to be complete in all details to make a cost estimate; the estimate can be made earlier in the product-development process. Cost trades between different designs can therefore be made at an early stage of the product-development cycle with empirical methods.

Example 2: Assembly Estimate for Riveted Parts

The American Machinist Cost Estimator (Ref 7) is a very widely used tool for empirical cost estimation. It contains data on 126 different manufacturing processes, and a spreadsheet format is used throughout for the cost analysis. One example is an assembly process: it is proposed to rivet the aluminum frame used on a powerboat. The members of the frame are made from 16-gage aluminum. The buttonhead rivets, which are sized according to recommendations in Ref 12, are 5/16 in. in diameter and conform to ANSI standards. Figure 2 shows the part.

Fig 2 Powerboat frame assembly

There are 20 rivets in the assembly, five large members of the frame, and five small brackets. Chapter 21 in Ref 7 includes six tables for setup, handling, pressing in the rivets, and riveting. A simple spreadsheet (for the first unit) might look like Table 2. The pieces are placed in a frame, the rivets are inserted, and the rivets are set. The total cycle time for the first unit is 18.6 min. There are several points to mention here. First, the thickness of the material and the size of the rivets play no direct part in this simple calculation; the methods of Ref 7 do not include such details.

Table 2 Spreadsheet example for assembly of frame (Fig 2)

Source(a)   Process                           Time, min
21.2-1      Get 5 frame members from skid     1.05
21.2-1      Get 5 brackets from bench         0.21
21.2-2      Press in hardware (20 rivets)     1.41
21.2-3      Set 20 rivets                     0.93
            Total cycle time                  3.60
            Setup time                        15

(a) Tables in Ref 7, Chapter 21
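The spreadsheet arithmetic in Table 2 is simple enough to reproduce directly. The sketch below assumes, as the stated 18.6 min first-unit total implies, a one-time setup of 15 min on top of the 3.60 min assembly cycle:

```python
# Element times taken from Table 2 (Ref 7, Chapter 21), in minutes
elements = {
    "Get 5 frame members from skid": 1.05,
    "Get 5 brackets from bench": 0.21,
    "Press in hardware (20 rivets)": 1.41,
    "Set 20 rivets": 0.93,
}

cycle_time = sum(elements.values())        # repeated for every assembly
setup_time = 15.0                          # assumed one-time setup, min
first_unit_time = setup_time + cycle_time  # time to produce unit 1

print(f"cycle = {cycle_time:.2f} min, first unit = {first_unit_time:.1f} min")
```

Subsequent units avoid the setup charge, so the marginal time per assembly is the 3.60 min cycle, before any learning effects.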

Yet common sense suggests that some of the details must count. For example, if the rivet holes are sized to have a very small clearance, then the time for the "press in hardware" task, where the rivets are placed in the rivet holes, would increase. In like manner, if the rivets fit more loosely in the rivet holes, the cycle time for this task might decrease. The point of this elementary discussion is that there is some implied tolerance for each step in the assembly process.


In fact, one can deduce the tolerance from the standard specification of the rivets. From Ref 12, the tolerance on 5/16 in. diameter buttonhead rivets is 0.010 in., so the tolerance of the hole would be about the same size.

The second point is that there are 30 parts in this assembly. How the parts are stored and how they are placed in the riveting jig or fixture determines how fast the process is done. With experience, the process gets faster, and there is a well-understood empirical model for this process learning. The observation, often repeated in many different industries, is that inputs decrease by a fixed percentage each time the number of units produced doubles. So, for example, if Li is the labor in minutes of the ith unit produced and L0 is the labor of the first unit, then:

Li = L0 · φ^(log2 i)

The parameter φ measures the slope of the learning curve. Learning curve effects were first observed and documented in the aircraft industry, where a typical rate of improvement might be 20% between doubled quantities. This establishes an 80% learning function, that is, φ = 0.80. Because this example is fabricated from aluminum, with rivets typical of aircraft construction, it is easy to work out that the 32nd unit will require 32.7% of the time (6.1 min) compared to the first unit (18.6 min).
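The learning-curve rule, a fixed fractional reduction at each doubling of cumulative output, is straightforward to compute. The sketch below uses the 80% rate and 18.6 min first-unit time from this example:

```python
import math

def unit_time(first_unit_time: float, unit_number: int,
              learning_rate: float = 0.80) -> float:
    """Time for the nth unit when time falls by the factor
    learning_rate at each doubling of cumulative output."""
    doublings = math.log2(unit_number)
    return first_unit_time * learning_rate ** doublings

t32 = unit_time(18.6, 32)   # 32 = 2**5, so 18.6 * 0.8**5 ≈ 6.1 min
```

Because the doubling count grows only logarithmically, improvement from one unit to the next falls off steeply as production accumulates, which is the point made at the end of this section.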

Learning occurs in any well-managed manual assembly process. With automated assembly, "learning" occurs only when improvements are made to the robot used. In either case, there is evidence that, over substantial production runs and considerable periods of time, the improvement is a fixed percentage between doubled quantities. That is, if there is a 20% improvement between the tenth and twentieth unit, there will likewise be a 20% improvement between the hundredth and two hundredth unit.

The cost engineer should remember that, according to this rule, the percentage improvement from one unit to the next is a steeply falling function. After all, at the hundredth unit, it takes another hundred units to achieve the same improvement as arose between the 10th and 20th units (Ref 13).

References cited in this section

1. G. Boothroyd, P. Dewhurst, and W. Knight, Product Design for Manufacture and Assembly, Marcel Dekker, 1994, Chapt 1
7. P.F. Ostwald, American Machinist Cost Estimator, Penton Educational Division, Penton Publishing, 1988
11. W.N. Nordquist, Die Designing and Estimating, 4th ed., Huebner Publishing, 1955
12. E. Oberg, F.D. Jones, and H.L. Horton, Machinery's Handbook, 22nd ed., Industrial Press, 1987, p 1188-1205
13. G.J. Thuesen and W.J. Fabrycky, Engineering Economy, Prentice Hall, 1989, p 472-474

Manufacturing Cost Estimating

David P Hoult and C Lawrence Meador, Massachusetts Institute of Technology

Complexity Theory

Up to now this article has dealt with cost-estimation tools that do not require a complete description of the part or assembly to make the desired estimates. What can be said if the design is fully detailed? Of course, one could build a prototype to get an idea of the costs, and this is often done, particularly if there is little experience with the manufacturing methods to be used. For example, suppose a complex waveguide feed is to be fabricated out of aluminum for a modern radar system, and the part has some 600 dimensions. One could get a cost estimate by programming a numerically controlled milling machine to make the part, but is there a simpler way to get a statistically meaningful estimate of cost while incorporating all of the design details? The method that fulfills this task is complexity theory.


There has been a long search for the "best" metric to measure how complex a given part or assembly is. The idea of using dimensions and tolerances as a metric comes from Wilson (Ref 14). The idea presented here is that the metric is a sum of log(di/ti), where di is the ith dimension and ti is its associated tolerance (i ranges over all the dimensions needed to describe the part). According to complexity theory, the complexity of a part, I, is measured by:

I = Σi log(di/ti)   (Eq 12)

Originally, the log function was chosen from an imperfect analogy with information theory. It is now understood that the log function arises from a limit process in which the tolerance goes to zero while a given dimension remains fixed. In this limit, if good engineering practice is followed, that is, if the accuracy of the machine making the part is not greatly different from the accuracy required of the part, and if the "machine" can be modeled as a first-order damped system, then it can be shown that the log function is the correct metric. For historical reasons, the log is taken to base 2, and I is measured in bits. Thus Eq 12 is written:

I = Σi log2(di/ti)   (Eq 12a)
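The dimension-information metric translates directly into code. The part description below is hypothetical, purely to show the computation:

```python
import math

def complexity_bits(features):
    """Complexity I in bits: the sum of log2(d/t) over all
    (dimension, tolerance) pairs, in consistent units."""
    return sum(math.log2(d / t) for d, t in features)

# Hypothetical three-dimension part (mm): each pair is (d_i, t_i)
I = complexity_bits([(50.0, 0.1), (20.0, 0.05), (8.0, 0.02)])
```

Note that halving every tolerance adds exactly one bit per dimension, which is why tighter-toleranced parts score as more complex.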

There are two main attractions of complexity theory. First, I includes all of the dimensions required to describe the part; hence, the metric captures all of the information of the original design (for assemblies, the dimensions and tolerances refer to the placement of each part in the assembly). Second, it permits rigorous statements of how I affects cost. In Ref 8 it is proven that if the part is made by a single manufacturing process, the average time (T) to fabricate the part is:

T = A·I   (Eq 13)

Again, in many cases, the coefficient A must be determined empirically from past manufacturing data. The same formula applies to assemblies made with a single process, such as manual labor. The extension to multiple processes is given in Ref 8.

A final aspect of complexity theory worth mentioning is risk. Suppose a part with hundreds of dimensions is to be made on a milling machine. The exact sequence in which each feature of the part is cut determines the manufacturing time. But there are a large number of such sequences, each corresponding to some value of A. Hence there is a collection of As, whose mean corresponds to the average time to fabricate the part. That is the meaning of Eq 13.

It can be shown that the standard deviation of manufacturing time is:

σT = σA·I   (Eq 14)

where σT is the standard deviation of the manufacturing time and σA is the standard deviation of the coefficient A. σA can be determined from past data. These results have a simple interpretation: parts or assemblies with tighter (smaller) tolerances take longer to make or assemble, because with dimensions fixed, the log functions increase as the tolerances decrease. More complex parts (larger I) take longer to make (Eq 13), and more complex parts carry more cost risk (Eq 14). These trends are well known to experienced engineers.
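Taken together, the mean time and its spread can be sketched as below. The linear forms (mean time and its standard deviation both scaling with I) are as read from this discussion, with the derivation in Ref 8; the numeric coefficients here are hypothetical:

```python
def fabrication_estimate(I_bits: float, A_mean: float, A_std: float):
    """Mean fabrication time and its standard deviation for a part of
    complexity I_bits, given the mean and spread of the empirical
    coefficient A (time per bit). Linear forms assumed, per the text."""
    T_mean = A_mean * I_bits
    T_std = A_std * I_bits
    return T_mean, T_std

# Hypothetical shop data: A = 0.05 +/- 0.01 min/bit, part of 200 bits
T, sigma_T = fabrication_estimate(200.0, 0.05, 0.01)
```

The proportionality means the relative cost risk sigma_T/T is fixed by the process (sigma_A/A_mean), while the absolute risk grows with part complexity.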

In Ref 8, a large number of parts from three types of manufacturing processes were correlated according to Eq 13. The results for the manual lathe process shown here are typical of all the processes studied in Ref 8. Figure 3 shows the correlation of time with I, the dimension information, measured in bits. An interesting fact, shown in Fig 4, is that the accuracy of the estimate is no different from that of an experienced estimator.


Fig 3 Manufacturing time and dimension information for the lathe process (batch size 3 to 6 units)

Fig 4 Accuracy comparison for the lathe process

In Eq 13, the coefficient A depends on machine properties such as speed, operating range, and time to reach steady-state speed. Can one estimate its value from first principles? It turns out that for manual processes one can make rough estimates of the coefficient.

The idea is based on the basic properties of human performance, known as Fitts' law. Fitts and Posner reported the maximum human information capacity for discrete, one-dimensional positioning tasks to be about 12 bits/s (Ref 15). Other experiments have reported 8 to 15 bits/s for assembly tasks (Ref 16).

The rivet insertion process discussed previously in this article is an example. The tolerance of the holes for the rivets is estimated to be 0.010 in., that is, the same as the handbook value of the tolerance on the barrel of the rivet (Ref 12). It is then found that d/t ≈ 0.312/0.010 = 31.2 and log2(31.2) ≈ 4.96 bits for each insertion. The initial rate of insertion (Ref 7) was
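Combining the bits per insertion with a human information rate gives a rough first-principles time estimate per rivet. The sketch below spans the 8 to 15 bits/s range cited above:

```python
import math

d = 0.312   # rivet (and hole) diameter, in.
t = 0.010   # hole tolerance, in.

bits_per_insertion = math.log2(d / t)   # information per insertion, bits

# Rough insertion-time estimate across the reported human
# information-capacity range for assembly tasks
for rate in (8.0, 12.0, 15.0):          # bits/s
    seconds = bits_per_insertion / rate
    print(f"{rate:4.1f} bits/s -> {seconds:.2f} s per insertion")
```

A looser hole (larger t) lowers the bit count and so shortens the predicted insertion time, consistent with the tolerance discussion for the spreadsheet example.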
