STP 1231

Automation in Fatigue and Fracture: Testing and Analysis

Claude Amzallag, Editor

ASTM Publication Code Number (PCN): 04-012310-30

ASTM
1916 Race Street, Philadelphia, PA 19103
Printed in the U.S.A.
Library of Congress Cataloging-in-Publication Data

Automation in fatigue and fracture: testing and analysis / Claude Amzallag, editor.
(STP: 1231)
"ASTM publication code number (PCN) 04-012310-30."
Includes bibliographical references and index.
Photocopy Rights
Authorization to photocopy items for internal or personal use, or the internal or personal use of specific clients, is granted by the AMERICAN SOCIETY FOR TESTING AND MATERIALS for users registered with the Copyright Clearance Center (CCC) Transactional Reporting Service, provided that the base fee of $2.50 per copy, plus $0.50 per page, is paid directly to CCC, 222 Rosewood Dr., Danvers, MA 01923; Phone: (508) 750-8400; Fax: (508) 750-4744. For those organizations that have been granted a photocopy license by CCC, a separate system of payment has been arranged. The fee code for users of the Transactional Reporting Service is 0-8031-1985-2/94 $2.50 + .50.
Peer Review Policy
Each paper published in this volume was evaluated by three peer reviewers. The authors addressed all of the reviewers' comments to the satisfaction of both the technical editor(s) and the ASTM Committee on Publications.

The quality of the papers in this publication reflects not only the obvious efforts of the authors and the technical editor(s), but also the work of these peer reviewers. The ASTM Committee on Publications acknowledges with appreciation their dedication and contribution of time and effort on behalf of ASTM.
Printed in Fredericksburg, VA December 1994
Foreword

The International Symposium on Automation in Fatigue and Fracture: Testing and Analysis was held 15-17 June 1992 in Paris, France. It was cosponsored by the Societe Francaise de Metallurgie et de Materiaux (SF2M), Committee on Fatigue, France, and the American Society for Testing and Materials (ASTM), Committee E9 on Fatigue, USA.

Also offering valuable cooperation were the Society of Automotive Engineers (SAE), Fatigue Design and Evaluation Committee, USA; the Engineering Integrity Society (EIS), UK; and the National Research Institute for Metals (NRIM), Japan.

The Symposium was an extension of the series of International Spring Meetings of SF2M. This publication is a result of this symposium. Claude Amzallag, IRSID-Unieux, France, is the editor.
Acknowledgment
The Organizing Committee, who helped develop the program and provide session chairmen and reviewers, are acknowledged for their assistance. Ms. Gail Leese (PACCAR Technical Center, USA) and Dr. Dale Wilson (Tennessee Technical University, USA) helped shape the symposium, provided reviewers, and graciously offered their time in reviewing papers.

In addition to the help of the technologists cited above, the editor wishes to express gratitude to the staff members of SF2M and ASTM, particularly Yves Franchot, SF2M, who handled the administration of the symposium.
Contents

A Sampling of Mechanical Test Automation Methodologies Used in a Basic Research Laboratory - G. A. HARTMAN, N. E. ASHBAUGH, AND D. J. BUCHANAN  36

Computer Applications in Full-Scale Aircraft Fatigue Tests - R. L. HEWITT AND

Microprocessor-Based Controller for Actuators in Structural Testing - R. SUNDER

An Automated Image Processing System for the Measurement of Short Fatigue Cracks at Room and Elevated Temperatures - L. YI, R. A. SMITH,

Computer-Aided Laser Interferometry for Fracture Testing - A. K. MAJI AND

Automated Data Acquisition and Data Bank Storage of Mechanical Test Data: An Integrated Approach - G. BRACKE, J. BRESSERS, M. STEEN, AND H. H. OVER  108

Sampling Rate Effects in Automated Fatigue Crack Growth Rate Testing -

Procedure for Automated Tests of Fatigue Crack Propagation - V. BACHMANN,

Automation of Fatigue Crack Growth Data Acquisition for Contained and Through-Thickness Cracks Using Eddy-Current and Potential
A Computer-Aided Technique for the Determination of R-Curves from Center-Cracked Panels of Nonstandard Proportions - G. R. SUTTON,

FATIGUE UNDER VARIABLE AMPLITUDE LOADING

The Significance of Variable Amplitude Fatigue Testing - D. SCHÜTZ

Spectrum Fatigue Life Assessment of Notched Specimens Using a Fracture Mechanics Based Approach - M. VORMWALD, P. HEULER, AND C. KRAE  221

Spectrum Fatigue Testing Using Dedicated Software - C. MARQUIS AND J. SOLIN  241

A Computerized Variable Amplitude Fatigue Crack Growth Rate Test Control

Automated Fatigue Test System for Spectrum Loading Simulation of

High-Cycle Fatigue of Austenitic (316L) and Ferritic (A508) Steels Under Gaussian Random Loading - J.-P. GAUTHIER, C. AMZALLAG, J.-A. LE DUFF,

Crack Closure Measurements and Analysis of Fatigue Crack Propagation Under Variable Amplitude Loading - C. AMZALLAG, J.-A. LE DUFF,

A Fatigue Crack Propagation Model Under Variable Loading - J. GERALD AND

Sensitivity of Equivalent Load Crack Propagation Life Assessment to Cycle-Counting Technique - E. LE PAUTREMAT, M. OLAGNON,

FATIGUE AND FRACTURE ANALYSIS AND SIMULATION

Fatigue Life Prediction Under Periodical or Random Multiaxial Stress States

Neuber-Based Life Prediction Procedure for Multiaxially Loaded Components

Fatigue Test Methods and Damage Models Used by the SNCF for Railway

Load Simulation Test System for Agricultural Tractors - K. NISHIZAKI  419

Applying Contemporary Life Assessment Techniques to the Evaluation of Urban Bus Structures - M. M. DE FREITAS, N. M. MAIA, J. MONTALVAO E SILVA,

Fatigue and Fracture Analysis of Type 316L Thin-Walled Piping for Heavy Water Reactors: Crack Growth Prediction Over 60 Years (With and Without Stratification) and Flawed Pipe Testing - A. B. POOLE

A Rule-Based System for Estimating High-Temperature Fatigue Life -

7010 Alloys Subjected to Aeronautical Spectra - C. BLEUZEN,

Using Maximum Likelihood Techniques in Evaluating Fatigue Crack Growth Curves - S. E. CUNNINGHAM AND C. G. ANNIS, JR.  531

Advances in Hysteresis Loop Analysis and Interpretation by Low-Cycle Fatigue Test Computerization - G. DEGALLAIX, P. HOTTEBART, A. SEDDOUKI,

An Automatic Ultrasonic Fatigue Testing System for Studying Low Crack Growth at Room and High Temperatures - T. WU, J. NI, AND C. BATHIAS  598

Database for Aluminum Fatigue Design - D. KOSTEAS, R. ONDRA, AND

Material Data Banks: Design and Use, an Example in the Automotive

Hypertext and Expert Systems Application in Fatigue Assessment and Advice

A Software System for the Enhancement of Laboratory Calculations - A. GALTIER  648
Overview

STP1231-EB/Dec 1994

In the diverse and complex technology of fatigue and fracture, it is increasingly important for societies and engineers to exchange information of mutual interest. It is thus critical to provide forums, such as the subject symposium, to allow for open exchange. With knowledge of the needs of industry, researchers gain insight valuable in assuring their focus is on meaningful topics. Armed with the latest developments from the research community, engineers, in turn, are able to apply and validate these concepts and findings.

The goal of the Symposium on Automation in Fatigue and Fracture: Testing and Analysis was to be just such a forum on an international scale. Developers of testing methodology, researchers and scientists who evaluate and predict materials response, and engineers who apply the results to current-day challenges in industry joined together to reflect on recent achievements in the areas of:
1. Automated testing systems and methods,
2. Models and methods for predicting fatigue life under complex loading,
3. Fatigue and fracture analysis and simulation, and
4. Applications and prediction methods.
This collaboration resulted in the presentation of 45 papers to an audience of around 150 technologists, representing more than 18 countries and 5 continents. The broad range of topics describes how advancements in digital computer hardware and software have opened up new opportunities in mechanical testing, modeling of physical processes, data analysis and interpretation, and, finally, applications in engineering environments.
This volume is offered as a valuable source of information for all those interested in deepening their understanding of fatigue and fracture phenomena. It is the hope of all involved that this may spawn yet further ideas and innovations in applying multidisciplinary technologies to testing and analysis automation, which in turn may open new doors of understanding.

C. Amzallag
IRSID-Unieux, France;
symposium chairman and editor
Automated Testing Systems and Methods
Arthur A. Braun¹
A Historical Overview and Discussion of
Computer-Aided Materials Testing
REFERENCE: Braun, A. A., "A Historical Overview and Discussion of Computer-Aided Materials Testing," Automation in Fatigue and Fracture: Testing and Analysis, ASTM STP 1231, C. Amzallag, Ed., American Society for Testing and Materials, Philadelphia, 1994, pp. 5-17.
ABSTRACT: Consistency of test data has always been a key concern in any materials testing application. Test technique or method, operator skill and experience, and capabilities of the apparatus are all parameters that affect the consistency of the desired information. The arrival of testing automation has contributed significantly to improving the consistency of materials testing apparatus, modifying existing test methods, creating new test methods due to enhanced capability, and improving the productivity of testing systems.

This paper surveys the development of computer-aided testing over the last 20 to 25 years and includes a discussion of current systems implementations and the emerging area of laboratory-wide automation. The rapid development of materials testing automation capability has generally tracked the trends in the computer industry. Advances in microprocessor hardware technology have driven testing automation by allowing for embedded intelligence in key test system components and by allowing for high-performance supervisory computer subsystems to control or supervise the overall test rig. Software technology advances, in concert with expanding hardware capability, have provided truly useful real-time operating environments, more efficient applications development tools, and higher productivity through more intuitive user interface technology. All together, these technology improvements have allowed for more sophisticated, consistent, and higher performance testing automation. Further improvements will be realized through the true utilization of the emerging digitally based systems architectures and emerging networking technology. This discussion concludes with a brief look at where emerging capabilities such as these will allow for new types of experiments to be performed and where information management will be enhanced, thus allowing for greater productivity in the test laboratory.
KEY WORDS: materials testing, test automation, controls, data acquisition, historical survey,
fatigue (materials), fracture (materials), data analysis, testing methods
This paper describes the historical development of automation applied to fatigue and fracture testing. Automation capability for servohydraulic mechanical testing systems appeared in the late 1960s with the advent of lower-cost minicomputer capability and software options that allowed for the demanding real-time requirements of fatigue and fracture tests to be addressed. As computer hardware and software improved, gains in increased test control and data acquisition performance, as well as options to use the automation facility for new types of tests, emerged. This evolution occurred in several phases, which will be discussed here.
¹ Group manager, Applications Engineering, Aerospace Structures and Materials Testing, MTS Systems Corporation, Eden Prairie, MN 55344.
The first phase of early implementations was concerned primarily with interfacing lower-cost minicomputers with the system analog controls for data acquisition and program generation (drive signal or function generation). This allowed for greater efficiency and some of the first generation of calculated variable control tests (tests in which the computer was used to control a secondary or indirectly calculated parameter, such as true strain or stress intensity range, by adjusting the primary control parameter, such as force or strain, during the course of the test).
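As a concrete illustration of the calculated variable control idea (a hypothetical sketch, not a description of any particular system's code), the supervisory computer can hold a constant stress intensity range by rescaling the commanded force range as the crack grows. The compact-tension geometry function follows ASTM E 647; the specimen dimensions, units, and the routine names in the final comment are assumptions for illustration.

    def geometry_factor(alpha):
        # f(a/W) for the compact-tension specimen, valid for a/W >= 0.2
        poly = 0.886 + 4.64 * alpha - 13.32 * alpha**2 + 14.72 * alpha**3 - 5.6 * alpha**4
        return (2.0 + alpha) / (1.0 - alpha) ** 1.5 * poly

    def force_range(delta_k, a, thickness=0.010, width=0.050):
        # force range (N) giving the requested delta-K (Pa*m**0.5) at crack length a (m)
        return delta_k * thickness * width ** 0.5 / geometry_factor(a / width)

    # supervisory update, once per measured crack increment:
    #   a = read_crack_length(); set_force_range(force_range(20.0e6, a))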
The second phase was really a transition. Prior to the transition, the minicomputer solution was optimized for higher performance with more capable hardware and software. The transition began with the availability of very low cost personal computers (PCs) and the beginnings of microprocessor technology use in distributed system functions such as data displays, servocontrollers, and control of peripheral devices such as temperature controllers. Applications software quickly used these enhanced capabilities, and many types of tests were created that used computer control.
The third phase is the period we are currently experiencing, where there has been a re-integration of system control functions with data acquisition, function generation, and peripheral control in the current digital control systems, coupled with the use of higher-performance PC or workstation hardware and modern software technology. The emphasis is shifting from hardware orientation to software. The applications possibilities of some of these totally software-based systems remain to be realized in third-generation applications software. It is believed that the extension of this phase will be not necessarily in radical changes to the automation of the test system but rather in the connection of the test system to design, manufacturing, and modeling functions within a given enterprise through networking and enhanced software data sharing capability. Also, the software-based nature of the control systems will be utilized to implement truly adaptive control (autotuning systems or systems that optimize the control parameters in response to changes in the test specimen) and to implement new tests based upon the ability to use calculated parameters to control tests. Each of these periods will be discussed in more detail in terms of hardware, software, applications, and performance.
Early Implementations (1965 to 1975)
Servohydraulic test system technology emerged in the late 1950s and early 1960s, with applications in structural testing and simulation being the first requirements. These systems used analog control based upon vacuum tube technology [1]. By the mid-1960s, servohydraulic test systems were becoming widely used for fatigue and fracture tests. Several evolutions of electronics technology were required before the vacuum tube-based controls were replaced, first by discrete transistor logic and then by integrated circuit technology. Initial attempts using analog computers for test automation provided significant enhancements to the basic closed-loop capability [2]. The desire to utilize an easier-to-program digital computer could not be satisfied, however, until cost-effective digital computer hardware and software became commercially available. By the end of the decade, the commercial availability of minicomputer systems provided the first opportunity to marry computer control to these electrohydraulic systems.

These first implementations interfaced the minicomputer to the analog controller through an analog interface in which digital-to-analog (D/A) converters were typically used as a command reference (program source or function generator source) for the system and analog-to-digital (A/D) converters were used to acquire data (measure and store forces, strains, displacements, etc.) from the system. Figure 1 illustrates the typical system architecture functionally. Figure 2 shows a typical system configuration from this period. The computers used were, by today's standards, limited. The typical PDP 8 system manufactured by Digital Equipment Corporation utilized limited ferrite core memory, typically in the 4 to 8-k word range, had limited processing power, and required paper tape for program input and storage. Disk and tape technology usage became more viable as costs for these devices were reduced.
FIG. 2 Example of a first-generation PDP 8 automated test system.
The A/D and D/A converters used were typically 12-bit devices providing one part in 4096 resolution over ±10 V. The software in the earliest systems was either machine language based, making programming the system a major ordeal, or a specialized assembly language developed for materials testing. The "MTL" language developed by MTS Systems Corporation is an example of one of these proprietary languages. Much progress was made in this mode, as exemplified by the work of Conle and Topper [3], Richards and Wetzel [4], and Martin and Churchill [5]. Significant advances were made in performing strain-controlled fatigue tests with calculated variable limit programming for load, strain, or inelastic strain. A significant advance common to all of these works was the introduction of the computed variable control capability discussed previously. A good example of this approach is the tests that were developed for axial strain control, where the axial strain was calculated from the diametral strain [6].
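A hedged sketch of that conversion is given below; the partition of the diametral strain into elastic and plastic parts follows the usual hourglass-specimen treatment (Ref 6 gives the original development), and the modulus and Poisson ratios are assumed material constants.

    def axial_strain(stress, diametral_strain, modulus=200.0e9,
                     nu_elastic=0.3, nu_plastic=0.5):
        # elastic part comes directly from the measured stress
        elastic_axial = stress / modulus
        # the remaining diametral strain is attributed to plastic deformation
        plastic_axial = -(diametral_strain + nu_elastic * elastic_axial) / nu_plastic
        return elastic_axial + plastic_axial

    # Each control update the computer reads stress and diametral strain and
    # adjusts the command so that axial_strain() tracks the demanded waveform.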
The most significant limitations of these early systems were the severe memory limitations and the primitive programming environment for creating testing applications programs. By the middle of the 1970s, metal oxide semiconductor memory, MSI (medium scale integration), and the use of higher level languages such as BASIC and FORTRAN brought about the next phase of development in testing automation.
The Minicomputer Refinement Period (1972 to 1980) and the Transition to Personal
Computers (1980 to 1985)
The advent of cheaper memory and higher performance processors, as exemplified by the early members of Digital's PDP 11 family of minicomputers, allowed for higher level language use on these systems to become feasible. Languages such as FORTRAN and especially interpretive BASIC required another level of performance in the computer. This additional performance was not required in the machine language/assembly language implementations. This added complexity also required more memory in addition to a more powerful processor. The early 1970s brought hardware meeting these requirements from companies such as Digital Equipment, Data General, and Hewlett Packard. Mass storage had developed to the point where magnetic tape and disk subsystems were usable and the paper tape based systems were disappearing. Computer manufacturers were also providing "operating systems" that managed system peripherals and memory and provided a structure upon which to build and use higher level programming tools.
The basic system hardware architecture of the systems implementation did not change radically during this time. A "processor interface" continued to bridge the space between the analog control system and the computer. There were, however, some attempts to eliminate the analog controls in some of the earliest direct digital control (DDC) systems at this time [7]. Processor performance, however, severely limited the sample rate of these systems and forced the majority of implementations to use analog controllers. A/D and D/A resolution initially was limited to 12 bits but increased to 14 and 16 bits in the late 1970s and early 1980s as higher-resolution, higher-performance components became available. Improvements in function generation were developed that provided more localized hardware control of the D/A converter, such as "segment generation" (where a local clock steps the D/A through a wave table and provides scaling), thus off-loading the computer from generating every D/A step and freeing up time for other tasks. Similar developments were provided through local clocking of A/D input channels. Also, other hardware features were developed for the "processor interface." Computer-controlled control mode switching, system monitoring, voltage sensing, digital input/output (I/O) logic, and computer hydraulic system shutdown capabilities were refined and then put under software control through callable library routines accessible in the high level programming language used with these systems.
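The following hypothetical sketch illustrates what segment generation off-loads from the supervisory computer: the computer fills a one-cycle wave table in converter counts and sets the local clock once, and the hardware then steps the D/A through the table on its own. A 12-bit bipolar converter spanning plus or minus 10 V is assumed, and the download and clock routines named in the comments are imaginary.

    import math

    def sine_wave_table(points=256, amplitude_volts=5.0, full_scale_volts=10.0):
        # 12-bit bipolar converter: 4096 counts over +/- full scale
        counts_per_volt = 2048.0 / full_scale_volts
        return [int(round(amplitude_volts * math.sin(2.0 * math.pi * i / points)
                          * counts_per_volt))
                for i in range(points)]

    # table = sine_wave_table()
    # download_segment(table); set_segment_clock(cycles_per_second * len(table))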
The most notable advances were accomplished in the software environment, where higher level programming languages with built-in function calls to assembly language hardware control routines were used to make the task of developing test software somewhat easier. The work of Donaldson et al. described in Ref 8 is typical of the state of the art in the mid 1970s. These systems at first were typically single station, that is, there was one computer and processor interface per test system. Graphics capability emerged in the early 1970s, allowing acquired data to be plotted on a terminal screen and plots to be output to plotter and hard copy units for reporting. Figure 3 shows a typical system from this period of refinement. The programming languages typically had a set of callable routines for graphics that allowed for "on-line" graphics to be shown during the course of a test. To obtain the best real-time response possible, the operating systems for these computers were typically memory resident, nonswapping, and did not dynamically reallocate memory. Digital's RT-11 operating system was a typical example of this type of operating system. This changed as hardware, peripheral, and memory performance increased toward the end of this period.
During this time, test technology advanced, with the enhanced computer power being utilized to perform multiaxial test control with data acquisition [9] and stress intensity range controlled fatigue-crack growth tests [10], among many others. The hallmark of this period, however, was that software technology was expanding to use the higher performance processors, additional memory, and more readily available mass storage capability while, in general, maintaining the need for a single computer to be dedicated to a single test system. The predominant computer suppliers were Digital Equipment Corporation, Data General, and Hewlett Packard.

FIG. 3 Automated test system from the early 1970s.
Transition Period
Increased performance in minicomputers, additional memory, and less expensive higher performance mass storage facilitated the transition from single-station systems to multistation and multiuser systems. This is the culminating period of the development and use of minicomputer systems in materials testing applications. The subsequent availability of microprocessor technology caused the next real evolution to occur. It is interesting to note that during the period from the late 1960s to the early 1980s the emphasis consisted of using a single processor for all tasks, first on a single system and then on multiple systems. Processor interface technology, programming languages, and operating systems concentrated on this philosophy.

The multistation/multiuser systems that evolved in the late 1970s and early 1980s used the highest performance minicomputer technology available. The Digital PDP 11/34 became, for example, a common platform upon which to implement some of these systems. Figure 4 shows a typical system configured to control five test stations performing fatigue-crack growth tests. Extended addressing allowing for increased memory (the 11/34, for example, used 18-bit memory addressing), faster disk drives (allowing swap-oriented operating systems to be usable in real time), and operating systems designed for real-time multiuser activity allowed the extension to multistation systems.
Applications software did not necessarily change greatly during this time but rather was refined to utilize the higher performance.
FIG. 4 Multistation/multiuser system typical of the late 1970s and early 1980s.
The availability of microprocessor technology prior to desk-top personal computers allowed these systems to reach their ultimate form. Figure 5 shows a service simulation system in which the multiuser minicomputer system controls two test systems as well as two microprocessor-based temperature/humidity chamber controllers while performing network transactions with a laboratory host computer. As performance requirements began to exceed the capabilities of these multiuser implementations, microprocessor technology was used in a new generation of processor interfaces to unload the beleaguered minicomputer. Distribution of some functions, such as environmental chamber control or auxiliary data acquisition, was also used to off-load the central minicomputer. It was becoming obvious that the performance of these multiuser systems was very difficult to predict and that it was very easy to overcommit the resources of the minicomputer and its operating system despite the sophistication of these systems. To perform multiple real-time tests with function generation, data acquisition, on-line calculated variable control, graphics, and remote control of peripheral controllers such as chamber controllers, and to carry out network transactions, is about as much as these single-processor architectures could handle and is a testimonial to the ingenuity of the minicomputer vendors and those who utilized the technology.
The final form of the minicomputer-based system was essentially a distributed system, with central function generation, data acquisition, and test control still the domain of the supervisory minicomputer but with ancillary functions distributed elsewhere. Examples were microprocessors embedded in the processor interface, further refining function generator and data acquisition local control; use of external intelligent peripheral controls for environmental systems; and "smarter" analog servocontrollers with functions such as mode switching or data display processing controlled by local embedded microprocessors.

FIG. 5 Multistation test system with environmental simulation control.
This was the state of the art during the period 1978 to 1983. ASTM STP 710 on Computer Automation of Materials Testing from 1978 provides a wide overview of the state of the art in this period [11]. The essence of those symposium proceedings revolves around getting the most out of the minicomputer architectures, their associated operating systems, and software programming environments. This period overlaps the availability of the first large-scale integration devices, such as Digital's LSI-11 processors, but the design approach is the same. These systems were complex. It was very difficult to predict the performance of the multiuser environment. Applications programming remained the responsibility of the user, with little off-the-shelf test software really available until the mid-1980s. Thus, the system was only as easy to use as the skill of the programmer would allow. Some implementations with lower cost single-user desk-top computers were beginning to emerge in response to the complexity of the multiuser solution. The further development of microprocessors essentially ended the era of minicomputers on materials testing systems.
Personal Computers and Materials Testing
The further development of microprocessor technology brought about the inevitable development of cost-effective single-board computers, desk-top computers, and eventually personal computers. This evolution brought about a significant change in testing automation and
accelerated the end of the minicomputer as the materials testing automation solution of choice. Essentially, the change was economic. These new microcomputers provided minicomputer performance at very low cost. Most of the minicomputer manufacturers realized this when it was too late to respond and lost the market to a new group of suppliers. Miller et al. provide an eloquent discussion of the merits of the central computer time-sharing implementation versus distributed control [12]. Their solution is the natural progression of the distribution of intelligence discussed in the previous section.
The availability of personal computers from IBM, Apple, and others caused a radical change to the nature of materials testing automation in the mid 1980s. The PC revolution alone was not sufficient, however. In addition, as in the previous evolution with the minicomputer, suitable I/O hardware, operating systems, and programming tools had to become available to allow the process to begin again with the new generation of computing hardware. In some ways, a step backwards was made in terms of hardware and software sophistication. Much of the I/O hardware available for PCs at this time was 12 bit rather than 16 bit, which was the standard on minicomputer implementations. In addition, many PC I/O modules did not support simultaneous sample/hold data acquisition or allow simultaneous input and output in direct memory access (DMA) mode. The software callable routines for data acquisition, signal output, and general purpose I/O lacked the customization for materials tests that had been engineered into their minicomputer-based counterparts. Finally, the operating systems were limited in terms of how they supported concurrency and managed memory. The proceedings of the ASTM Symposium on Automated Test Methods for Fracture and Fatigue Crack Growth, held in Pittsburgh in November 1983, show the beginning of the transition [13]. There is a mix of papers, with the majority discussing work performed with minicomputer architecture systems, but there are also a number of papers discussing the use of Apple II, Hewlett Packard Desktop, and other PC systems, illustrating the beginning of the change. It would take several more years for the performance of these PC systems to increase significantly and for the quality of the I/O hardware and software tools to be good enough to displace the minicomputer implementations. The prime driver, however, was the low cost of these systems, making them very attractive for laboratory automation, and this alone forced the transition.
Personal Computers, Workstations, Digital Control, and Laboratory Automation
The period from 1985 to the present has seen the maturing of the PC platform. In addition, the required changes have happened with respect to available high performance plug-in I/O hardware for PCs. Software operating systems, development systems, and user interfaces have become available, allowing for the proper utilization of the hardware and the implementation of easy-to-use applications software for common materials tests and the creation of generic test programs that may be used as tool boxes for addressing nonstandard tests without forcing the user to write traditional software. The industry acceptance of the IBM-compatible architecture as a de facto standard has driven hardware costs down to incredibly low levels while providing hardware performance that far exceeds the performance of the most costly minicomputer systems available as recently as three years ago. Microprocessor technology has advanced along with software tools to allow for multiprocessor implementations. We no longer have to rely on a single computer to accomplish the task; rather, multiple processors may be almost casually assembled into systems for advanced control and data acquisition implementations.

Two systems architectures are currently in use. The most common architecture remains the analog controller supervised by a computer with suitable I/O hardware for function generation, data acquisition, and ancillary control. The new analog controllers use the latest in low-noise integrated circuitry and are much less expensive than their predecessors. They are designed with the intent of PC automation. The state of the art with respect to systems of this type is
somewhat captured in the proceedings of the 1989 ASTM Symposium entitled Applications of Automation Technology to Fatigue and Fracture Testing [14]. By this time the IBM architecture dominates the examples of systems presented. The Apple II implementations have disappeared, to be replaced by the Apple Macintosh. One paper uses a UNIX workstation [15] for its multitasking features and graphical user interface. This type of implementation is a forerunner of the more advanced systems discussed later.
The other system implementation, more recent in design, is the digital control system. These systems continue to utilize analog signal conditioning and servovalve driver electronics, but the control loop, function generation, and data acquisition are implemented in software. Due to the high sample rate required to maintain reasonable system accuracy and the parallelism inherent in materials testing systems, most if not all of these digital control systems tend to be multiprocessors. It should be noted that at this point the journey has been made from the design where a single processor controlled multiple systems to the point where multiple processors (each singly more powerful than the single processor used for the multistation implementation) control a single system. The process of distribution of processing power in these systems is becoming fully developed. In these systems, the PC becomes the supervisor, the development environment, the host for the system user interface, and the site for data analysis, networking, and report generation. The multiprocessor has multiple high-performance processors distributed and networked via a network or bus in a manner such that processor power may be dedicated to each specific function within the system. One such system on the market currently uses nine processors connected through 20 million bit/s links to perform digital control, data acquisition, and system housekeeping. The same 20 Mbit/s connection connects an industry standard PC to the multiprocessor, completing the system. A block diagram for this digital control system is shown in Fig. 6. A typical system as actually implemented in hardware is shown in Fig. 7. Communication within the multiprocessor and with the PC becomes a key design consideration, along with operating system choice and user interface design.
[Figure 6 block labels. PC: user interface, test execution, development, readout, hydraulic system control. Multiprocessor: DDC servo control, data acquisition, function generation. Local control panel: specimen loading, local status display, hydraulic control, run/stop control.]

FIG. 6 Digital system with PC user interface (functional diagram).
FIG. 7 Typical test system with current-generation digital controls.
The state of the art seems to be a very high speed communications facility used within the multiprocessor and as a link to the host computer; a multitasking, virtual memory operating system that allows multithreaded execution and preemptive priority scheduling; and a graphical user interface, or GUI, to allow for intuitive operation of the system. A history of the development of such a system, and many of the tradeoffs made to obtain a solution such as the one identified earlier, are discussed in Ref 16.
The software-based nature of a digital system allows for greater flexibility and a high degree of reconfigurability. This is especially true of a random access memory (RAM)-based system, where the operating software for the control system is downloaded from the host. In this case, a system can be updated with new software, and the new features may be utilized the next time the multiprocessor is booted from the host. A software-based user interface (such as one implemented via a graphical user interface, or GUI) is desirable over hard controls such as buttons or knobs. In the same manner, the user interface may be redesigned or modified and the changes used without modifying or altering hardware. This type of system implementation ultimately provides lower maintenance and update cost and, hence, lower cost of ownership. A modular hardware architecture that allows for control system software modifications and user interface modifications through software updating rather than hardware modification will provide the necessary platform for advanced controls and user interface technology as they become available.
Applications software at this time appears to be taking two paths. One path provides the traditional standard tests in turnkey applications software; these packages typically address tests that are well defined by existing standards. The other route is a general-purpose application program that may be configured ("programmed," but at a very high level) to perform a variety of control, data acquisition, and logical tasks, allowing nonstandard tests to be accomplished. Most vendors of test equipment now provide applications programs of these two types. Programming tools exist for these systems as in the past, but the occurrence of user programming is becoming less with time as the expectations of the user community become higher concerning software for materials testing. Expectations with respect to ease of use are much higher due to the widespread use of GUIs such as OS/2's Presentation Manager, Microsoft's Windows, Apple's Macintosh OS, and Motif on top of UNIX. It is often joked that if a user has to read a manual, the software is substandard. This is becoming less of a joke and more of a test of the quality of the software as we move forward.
Futures
It seems logical that the emergence of the software-based systems that have been discussed will allow for enhancement of both machine control capabilities and applications programming. Since the servocontroller, function generator, and other system components are now programs or software objects, it seems that these objects will evolve and that other objects with additional capability will be created. One merely has to look at National Instruments' LabView product [17] and their concept of a virtual instrument, built in a graphical editor on a Macintosh, to understand the potential of software-based digital materials test system controllers. Some areas where one could expect development would be:
1. Truly adaptive control algorithms that learn from the system and test specimen.
2. Calculation of, and control from, virtual channels, where mathematical operations may be performed on incoming data streams and the resultant virtual data used for control or other decision making (a sketch of such a channel follows this list).
3. Feedback linearization and servovalve linearization, where the acquired feedback and the valve drive output are compensated by the system for nonlinearities inline on the executing system.
4. Advanced function generation capability for forcing system compliance through over-programming in random loading applications.
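A hedged sketch of item 2, the virtual channel concept, is given below: a quantity computed from the incoming force and displacement streams (here a running specimen stiffness) that can itself be used for limit detection or other decision making. The channel names and the 10% stiffness-drop criterion are illustrative assumptions only.

    def stiffness(force_buffer, displacement_buffer):
        # least-squares slope of force versus displacement over one data buffer
        n = len(force_buffer)
        mean_f = sum(force_buffer) / n
        mean_d = sum(displacement_buffer) / n
        num = sum((d - mean_d) * (f - mean_f)
                  for d, f in zip(displacement_buffer, force_buffer))
        den = sum((d - mean_d) ** 2 for d in displacement_buffer)
        return num / den

    # example decision based on the virtual channel:
    # if stiffness(f_buf, d_buf) < 0.9 * initial_stiffness: stop_test()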
In the user interface area, similar capability may be utilized to alter the look and feel of the system to better meet user requirements. Multitasking capability and interprocess communication could be used to teach models on-line through the observation of data returning from an executing materials test. Similarly, executing test software could, directly through laboratory networking and database technology, update a designer at another location to facilitate simultaneous engineering of components and structures. The author feels that most of these concepts are achievable on the current platforms and that the most significant gains over the next few years will be in the area of software enhancements to these new control architectures and the integration of these systems into enterprise-wide modeling, manufacturing, and design activities.
Summary
The historical development of materials testing automation as it applies to fatigue and fracture testing has been discussed, covering the period from 1965 through 1992. Some speculation has
also been presented regarding the direction of development in this area over the next three to five years. This survey has been created essentially to review the progress in this area and to provide background information to supplement the proceedings of the symposium of which this paper was a part. It is presented with the thought that often you cannot understand or see where you are going unless you know where you have been, and how you got there. The reader is strongly encouraged to investigate the references in this paper, as they are the true documentation for the developments in this area.
References
[1] Johnson, H. C., "Mechanical Test Equipment in the Sixties: A Decade of Radical Change," Closed Loop Magazine of Mechanical Testing, MTS Systems Corporation, Minneapolis, MN, Fall/Winter 1974, pp. 15-21.
[2] Richards, E. D. and Wetzel, R. M., "Mechanical Testing of Materials Using an Analog Computer," Materials Research and Standards, Feb. 1971, pp. 19-22.
[3] Conle, E. A. and Topper, T. H., "Automated Fatigue Testing," Closed Loop, MTS Systems Corporation, Minneapolis, MN, Spring 1971, pp. 8-10.
[4] Richards, E. D. and Wetzel, R. M., "Application of Analog and Digital Computers to Fatigue Testing," Ford Motor Company Scientific Research Staff Report SR 71-138, Ford Motor Company, Dearborn, MI, Sept. 1971.
[5] Martin, L. M. and Churchill, R. W., "Interfacing the Computer to a Materials Test System," Proceedings, Spring Meeting, Society for Experimental Stress Analysis, May 1969.
[6] Slot, T., Stentz, R. H., and Berling, J. T. in Manual on Low Cycle Fatigue Testing, ASTM STP 465, American Society for Testing and Materials, Philadelphia, 1969.
[7] Boggs, B. C., Mondol, N. K., McQuown, R. E., and Anderson, J. G., "Digital and Analog Computer Equipment and Its Application to In-House Testing," Use of Computers in the Fatigue Laboratory, ASTM STP 613, American Society for Testing and Materials, Philadelphia, 1976, pp. 2-26.
[8] Donaldson, K. H., Dittmer, D. E., and Morrow, J., "Fatigue Testing Using a Digital Computer-Based System," Use of Computers in the Fatigue Laboratory, ASTM STP 613, American Society for Testing and Materials, Philadelphia, 1976, pp. 27-49.
[9] Penn, R. W., Fong, J. T., and Kearsley, E. A., "Experience in Data Acquisition and Reduction for a Biaxial Mechanical Testing Program," Use of Computers in the Fatigue Laboratory, ASTM STP 613, American Society for Testing and Materials, Philadelphia, 1976, pp. 78-93.
[10] Kaisand, L. R. and LeFort, P., "Digital Computer Controlled Threshold Stress Intensity Factor Fatigue Testing," Use of Computers in the Fatigue Laboratory, ASTM STP 613, American Society for Testing and Materials, Philadelphia, 1976, pp. 142-159.
[11] Computer Automation of Materials Testing, ASTM STP 710, B. C. Wonsiewicz, Ed., American Society for Testing and Materials, Philadelphia, 1978.
[12] Miller, N. R., Dittmer, D. E., and Socie, D. E., "New Developments in Automated Materials Testing Systems," Automated Test Methods for Fracture and Fatigue Crack Growth, ASTM STP 877, W. H. Cullen, R. W. Landgraf, L. R. Kaisand, and J. H. Underwood, Eds., American Society for Testing and Materials, Philadelphia, 1985, pp. 9-26.
[13] Automated Test Methods for Fracture and Fatigue Crack Growth, ASTM STP 877, W. H. Cullen, R. W. Landgraf, L. R. Kaisand, and J. H. Underwood, Eds., American Society for Testing and Materials, Philadelphia, 1985.
[14] Applications of Automation Technology to Fatigue and Fracture Testing, ASTM STP 1092, A. A. Braun, N. E. Ashbaugh, and E. M. Smith, Eds., American Society for Testing and Materials, Philadelphia, 1989.
[15] McKeighan, P. C., Evans, R. D., and Hillberry, B. M., "Fatigue and Fracture Testing Using a Multitasking Minicomputer Workstation," Applications of Automation Technology to Fatigue and Fracture Testing, ASTM STP 1092, A. A. Braun, N. E. Ashbaugh, and E. M. Smith, Eds., American Society for Testing and Materials, Philadelphia, 1989.
[16] Braun, A. A., "The Development of a Digital Control System Architecture for Materials Testing Applications," Proceedings, 17th International Symposium for Testing and Failure Analysis, Los Angeles, Nov. 1991, American Society for Metals International, Metals Park, OH, 1991, pp. 437-444.
[17] "LabView 2 Reference Documentation," National Instruments, Austin, TX, 1990.
Somasundaram Dharmavasan¹ and Sarah M. C. Peers¹
General Purpose Software for Fatigue
Testing
REFERENCE: Dharmavasan, S. and Peers, S. M. C., "General Purpose Software for Fatigue Testing," Automation in Fatigue and Fracture: Testing and Analysis, ASTM STP 1231, C. Amzallag, Ed., American Society for Testing and Materials, Philadelphia, 1994, pp. 18-35.
ABSTRACT: Recently, most manufacturers of servohydraulic materials test equipment have developed digital control systems. Digital systems have several advantages over the more mature analog controllers, in particular, the availability of features that allow more realistic tests to be carried out. Also, the ability to interface and communicate with computers easily makes it possible to perform not only realistic but also very complex control and data acquisition tasks. However, the software required to define these complex tasks and communicate with digital controllers is demanding.

The development of digital controllers has been made possible by the tremendous advances in microprocessor technology. These developments have also given rise to computers with ever-increasing capabilities at lower costs. The software architectures controlling these computers have also given rise to sophisticated developments in order to make the computers easier to use. Most computers now have some form of graphical user interface (GUI). The downside of most GUIs is that the development of application software is much more complex. In addition, most of these operating systems are not ideally suited for the real-time control that is absolutely vital for automated fatigue testing.

The difficulties associated with developing software for specific fatigue tests may be somewhat alleviated by the development of suitable tools within an integrated architecture that allow the scientist/engineer to define the required test in the form of functional blocks. This type of general-purpose software can then take the test definition provided by the scientist and convert it into a form suitable for the digital controller.

A suitable test description paradigm is proposed in terms of general requirements; a test-description language with elements such as objects, functional blocks, time, events, and data presentation; and some specific requirements for a software development environment. From this, general-purpose software for automated fatigue testing can be developed and used at a high level of abstraction. In addition, the paper reviews the development of an integrated general-purpose program for automated fatigue testing that implements the functional blocks required but without the full test description language. The paper also uses the example of testing offshore structural components using realistic sea-state sequences, such as WASH, to illustrate the problems that must be solved.

Some implications of this approach in future fatigue testing applications are also discussed in the paper.
KEY WORDS: automated fatigue testing, digital control systems, servo-hydraulic test equip- ment, data acquisition, alternating current potential drop, crack growth monitoring, fracture (materials), fatigue (materials), testing methods, data analysis, test automation
The increasing complexity of structures and components is made possible by the availability of advanced modeling and analysis tools such as finite element methods. Although the use of these tools has resulted in the design of efficient components, they are still prone to fatigue damage.
¹ Lecturers, NDE Centre, University College London, London WC1E 7JE, UK.
The modeling of fatigue behavior in test samples and components is the subject of intense research. For example, in the offshore industry, the tubular welded joint, a commonly used structural component, has been the focus of attention for several years [1]. The type of testing carried out on these joints has progressed from simple constant-amplitude tests to complex wave forms that simulate the random wave loading experienced by offshore structures. In addition, the tests have been carried out in a seawater environment with cathodic protection. The data acquisition requirements are also quite demanding and have included crack-depth measurement using an alternating current potential drop (ACPD) system [2].
The limiting factor in increasing the load carrying capacity or efficiency is the availability of suitable materials. For example, structural ceramics are capable of withstanding extremely high temperatures and are being actively researched for use in high-efficiency engines. The development of nonmetallic materials requires new mechanical test methods that are capable of quantifying the strength characteristics for use in analysis methods.
The majority of the tests need some form of computer-controlled testing due to the complexity of loading or data acquisition requirements and, in view of this, most manufacturers of servohydraulic test equipment have developed digital control systems, or in other words, machine controllers that are programmable by a host computer. Digital systems have several advantages over the more mature analog controllers, as they have immediate (within one clock cycle) response times and, hence, step changes can be made to parameter values. In contrast, some of these features are not possible with analog controllers. Also, the ability to interface and communicate with computers easily makes it possible to perform not only realistic but also very complex control and data acquisition tasks. However, the software required to define these complex tasks and communicate with the digital controllers is demanding in terms of performance. The development of the software is also made difficult by the real-time nature of the problem and requires a good understanding of the behavior of the controller as well as the requirements of the test.

The development of digital controllers has been made possible by the tremendous advances in microprocessor technology. These developments have also given rise to computers with ever-increasing capabilities at lower costs. The software architectures controlling these computers have also given rise to sophisticated developments to make the computers easier to use. Most computers now have some form of graphical user interface (GUI). The downside of most GUIs is that the development of application software is more complex. In addition, most of these operating systems are not ideally suited for the real-time control that is absolutely vital for automated fatigue testing.

The difficulties associated with developing software for specific fatigue tests may be somewhat alleviated by the development of suitable tools within an integrated architecture that allow the scientist or engineer to define the required test in the form of a suitable high-level description. This type of general-purpose software can then take the test definition provided by the scientist and convert it into a form suitable for the digital controller. This paper will describe a suitable test description paradigm that has been implemented to work with a digital controller and the problems that had to be overcome. Some implications of this approach in future fatigue testing applications are also discussed.
A Fatigue Test Description Paradigm
Functional Blocks
A laboratory fatigue test can be abstracted to a sequence of well-defined processes that may or may not be carried out depending on conditions or circumstances. A test can then be described as a sequence of functional blocks, each of which has to have certain properties defined. Examples of functional blocks are sine waveform generation, collection of data from a strain channel, limit setting, etc.

Functional blocks may be carried out in parallel. In particular, the processes of data acquisition and of control are independent of one another in that they may be carried out, and usually are, in parallel.
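A minimal sketch of this abstraction is given below (it is not any real controller's programming interface): each functional block is a named process with its own parameters, a test is a collection of blocks, and a control block and a data-acquisition block are run in parallel. All block names and parameter values are illustrative.

    from threading import Thread

    class FunctionalBlock:
        def __init__(self, name, **parameters):
            self.name = name
            self.parameters = parameters

        def run(self):
            # placeholder for the real control or acquisition action
            print("running", self.name, self.parameters)

    loading = FunctionalBlock("sine waveform generation",
                              frequency_hz=5.0, amplitude_kn=50.0)
    recording = FunctionalBlock("strain channel acquisition",
                                sample_rate_hz=100, channel="gauge 3")

    # data acquisition and control are independent and usually run in parallel
    threads = [Thread(target=block.run) for block in (loading, recording)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()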
Requirements for the Test-Description Language
1 In common to any programming language, the primary requirement is that of complete-
ness The language should allow all elements of the test to be described fully In addition,
it should be flexible so that it does not constrain the design of a test
2 Real-time language As any fatigue test is a demanding real-time application, it should
have constructs that define time constraints In addition, there should be facilities to
recover from situations where there is insufficient time to carry out a task
3 Modularity Tests are often repeated in modified forms, hence, the ability to reuse
functional blocks of any test is important for practical (reduced time in developing a
test) and reliability reasons (fully tested functional blocks increase reliability) This is
best done by allowing the definition of hierarchies of functional blocks, that is, blocks
within blocks
4 Predictability and reliability The failure of a fatigue test may lead to the possibility
of a dangerous situation arising Thus, the language should be, as a minimum, predictable,
that it should not be possible for unexpected sequences of events to occur For further
safety, the language should provide a way of describing safe or expected behavior and
of defining error recovery actions This could then allow the recognition of potentially
dangerous situations, that is, of behavior outside the defined safe behavior that would
then trigger an appropriate recovery action
5. Built-in or stored knowledge reduces design time. Examples of such knowledge include materials and their properties, test-piece geometries, and graphs and report forms for the display of data. Information on the restrictions imposed by hardware limitations must also be made available to the test designer; for instance, the rate of data acquisition is often limited by the hardware. Tests could be stored as templates, which define the test completely except for a few parameters to be input at the time of rerunning the test.
6. As more knowledge is built into the system, it may be possible to predict the performance of the system to ensure that dangerous situations do not occur. Storage of past test results in an appropriate form requires intelligence; the appropriate form is one that allows future inductive or analogical reasoning about the behavior of similar tests.
Elements of the Language
The description language needs to have the following elements in order to fully describe a test.
Objects or Entities -"Objects" or entities are derived from the object-oriented programming paradigm, which includes specialization, inheritance, and the representation of relationships between objects [3]. Each type or "class" of object will have expected properties and will be used in particular ways.
Examples of objects are: the test-piece, materials, test hardware, computer records and files for data acquisition, data structures (including arrays), graphs, databases, and functional blocks.
Functional Blocks, Processes, and Scripts -A process can be defined as a specialized object with a particular role. A functional block, another specialized object, provides a "script" of processes or other functional blocks to be performed and defines the objects that are involved in the processes. The concept of a script is borrowed from the field of artificial intelligence [4], but here it is used merely as a practical way of describing a procedure to be carried out. The designed test itself is again a functional block.
The base process classes are:
1. data acquisition,
2. control/loading, and
3. input and output to the user interface.
To fully define the script requires control constructs for sequences, parallelism, iteration and repetition, and conditional logic.
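To make the idea of scripts and nested functional blocks concrete, a minimal sketch follows. It is written in Python purely for illustration; the class names Process and FunctionalBlock, and the behavior shown, are assumptions of this sketch rather than constructs defined here.

```python
# A minimal sketch of processes, functional blocks, and a sequential script.
# Class and method names are hypothetical illustrations, not part of any
# particular test-description language.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Process:
    """A specialized object with a particular role (acquisition, control, I/O)."""
    name: str
    action: Callable[[], None]

    def run(self):
        self.action()


@dataclass
class FunctionalBlock:
    """A functional block carries a 'script' of processes or nested blocks."""
    name: str
    script: List[object] = field(default_factory=list)   # Processes or blocks

    def run(self):
        for step in self.script:        # simple sequence construct
            step.run()


# The designed test itself is just another functional block.
acquire_strain = Process("collect strain channel", lambda: print("read strain"))
apply_sine     = Process("sine waveform generation", lambda: print("drive actuator"))

loading_block = FunctionalBlock("constant-amplitude loading", [apply_sine])
test          = FunctionalBlock("fatigue test", [loading_block, acquire_strain])
test.run()
```

A parallel construct (for example, running acquisition and control blocks in separate threads) would be added in the same spirit.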
Time -As fatigue tests can last from a few seconds to several months, a clock capable of working at different resolutions must be maintained. In addition, a virtual clock may be necessary for running simulated tests in order to check the overall test design.
Events -The description of an occurrence can be made clearer and more concise if it is possible to give an abstract description of the event. Hence, the implementation of the language must provide recognition and identification of given events. For example, an event may be given as "the failure of the test-piece," and there must therefore be a way of describing how and when the test-piece is said to have failed. A complete definition of "failure" is not always possible, and hence recognition is required.
For recognition, pattern matching or fuzzy logic or both [5], for the use of qualitative information, may be employed. However, the use of such concepts does not always sit well with the required predictability of the language. Whatever recognition process is used must be well defined; therefore, such advanced techniques may not be appropriate for this field at present.
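A minimal sketch of an event expressed as a well-defined predicate is given below; the channel, the threshold, and this particular definition of "failure of the test-piece" are assumptions made for illustration only.

```python
# Sketch of an event description as a predicate over measured data.  A real
# test would define "failure" in terms of its own instrumentation and limits.
def test_piece_failed(commanded_amplitude: float, measured_amplitude: float,
                      drop_fraction: float = 0.5) -> bool:
    """Recognize 'failure of the test-piece' as a loss of load-carrying capacity:
    the measured load amplitude falls below a set fraction of the command."""
    return measured_amplitude < drop_fraction * commanded_amplitude


# Example: 10 kN commanded amplitude, only 3 kN measured -> event recognized.
print(test_piece_failed(10.0, 3.0))   # True
```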
Data Analysis and Presentation -During and after the completion of a test, there will be a need to analyze the data obtained from the test. Therefore, an ability to define calculations and also to link to external calculation libraries is required. The link to external libraries is important because there is a wealth of validated and tested programs already available, and it would be impossible to incorporate all of this into a limited language.
Some of the features required are:
1. data acquisition and signal processing functions,
2. general mathematical functions,
3. statistical functions,
4. random number generators,
5. numerical techniques such as interpolation, and
6. graphical presentation of data, including conventional forms for the display of fatigue test data.
Software Development Environment
For ease of development of the test, various forms of the test description must be allowed. The ideal is a mixture of graphical elements with textual input. The description of a test is essentially procedural, to which traditional flow-charts are ideally suited. Hence, a description of the test could be carried out using graphical flow-charting symbols and arrows with appropriate text and references to entities. Such a graphical flowchart would then need to be translated into low-level test control instructions.
In defining the requirements for data acquisition in a test, it is easier to view a predicted graph of results, as given by a preliminary definition of the process, in order to judge that sufficient data will be acquired. Furthermore, a facility that allowed the amount and type of data to be acquired to be modified by pointing at and selecting appropriate sections of the graph would increase the effectiveness of the designed test.
Testing of a design is vital. One method is simulation, although there are difficulties in providing true simulation because of the uncertainties associated with the damage mechanisms of different materials and components. Therefore, a pseudo-simulation, in which the user triggers certain actions, is probably the only way in which testing prior to carrying out the test can be achieved. However, the interpretation of the simulation results can be difficult, particularly when they are presented only as numerical data. Simulation of the test shown graphically (stick diagrams of the expected effects of loading on test-pieces and graphs of data) is useful in allowing the user to assess the design.
FIG. 1 -Schematic diagram of test setup, showing the General Purpose Interface Bus link.
The approach behind FLAPS is to design the required test off-line. The designed test is then used by FLAPS to carry out the test. The general structure of the program is shown in Fig. 2.
System Setup -The System Setup phase is used to select system-dependent information, such as the type of controller being used and the type of crack measurement system, in addition to setting up the various parameters necessary before running a test, such as calibration, selection of engineering units, and other parameters. This information is used during the Run phase to control the tests in the most appropriate manner.
The ability to store and recall setup data rapidly is useful in cases where there is a need to change from one type of testing to another. The main advantage of digital systems is the ability to restore the overall system to a predetermined state; this aids the repeatability of tests.
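The following sketch illustrates the idea of storing and recalling a setup so that the system can be restored to a predetermined state; the field names and values are hypothetical and do not represent the actual FLAPS settings format.

```python
# Sketch of saving and restoring a System Setup to a file.  Field names and
# values are invented for illustration only.
import json

setup = {
    "controller": "digital servo-hydraulic",
    "crack_measurement": "ACPD multiplexer",
    "units": {"load": "kN", "displacement": "mm"},
    "calibration": {"load_cell_kN_per_V": 10.0},
}

with open("tubular_joint_setup.json", "w") as f:   # store the setup
    json.dump(setup, f, indent=2)

with open("tubular_joint_setup.json") as f:        # recall it later
    restored = json.load(f)

assert restored == setup   # the rig can be returned to the same predefined state
```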
Design -The Design phase allows the setting up of templates, known as Control Files, to run specific applications. The various waveform generation, data collection, and program control options are chosen and the sequence of operations is defined. This process is carried out prior to testing and provides a library of test routines.
It was recognized that, for sophisticated tests, the sequence of events had to be considered in detail and this information then programmed. It is in the programming of complex sequences that the Design phase provides a simplified method, by offering several high-level block types that can be linked in any sequence. These high-level functional blocks are described in the next section. Using traditional programming languages, the user would have to program each step.
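As an illustration of a Control File built from high-level block types, the sketch below lists a hypothetical sequence of blocks; the block names, parameters, and values are assumptions and not the FLAPS file format.

```python
# Sketch of a Control File as an ordered list of high-level blocks.  Only a few
# values would change between tests, so the file doubles as a reusable template.
control_file = [
    {"block": "waveform",    "shape": "sine", "amplitude_kN": 50.0,
     "mean_kN": 100.0, "frequency_Hz": 0.167, "cycles": 10_000},
    {"block": "acquire",     "channels": ["load", "strain"],
     "trigger": "every_n_cycles", "n": 100},
    {"block": "conditional", "test": "crack_depth_mm > 6.0",
     "action": "stop_test"},
    {"block": "loop",        "back_to": 0, "repeats": 50},
]

for step in control_file:
    print(step["block"], step)   # the Run phase would dispatch each block in turn
```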
Test Preview -One of the main problems in programming complex sequences for fatigue testing is that, in the event of a mistake, a valuable sample could be destroyed or substantial time lost. In order to minimize this risk, a Test Preview program was developed. This program reads in the Control File as input and graphically displays the events that make up the test, such as data acquisition actions, loading changes with profiles, conditional tests on input or output data, etc. The waveforms and the points at which data acquisition will occur are displayed on a time or cycle axis. A sample screen is shown in Fig. 3. Visual representation of real-time sequences or simulation has been attempted by several workers [8] but, invariably for complex problems, the visual representation can be difficult to interpret. Work is being carried out at University College London (UCL) on a hierarchical graphical representation of real-time processes.
Run -The Run phase uses the Control File, together with calibration and units information from the settings file, to control a test. This module has detailed knowledge of the controller in use and sends the appropriate instructions to the controller. The current range of digital controllers handles a substantial amount of real-time processing, thereby freeing the host computer to concentrate on analyzing and presenting data in addition to controlling the test.
Analysis -Facilities are provided for analyzing and presenting the data collected during a test, including presentation-quality graphics. The analysis part may be performed with commercially available spreadsheet packages, as facilities were developed for transferring data from the FLAPS database structure to other programs. However, for numerically intensive processing, spreadsheets can be very inefficient. For example, a simple least squares fit can take several seconds using a commercial spreadsheet, whereas, when written using a traditional compiled language such as FORTRAN or C, the same result can be achieved in a fraction of a second. It is possible with modern operating systems to dynamically link user-written programs with this module.

FIG. 3 -An example of a test sequence.
Waveform Generation and Control
The different types of waveform required to provide a comprehensive range of fatigue test
simulations are as follows:
Simple Waveforms -For most applications, a combination of constant-amplitude waveforms such as sine, square, and triangular is adequate. These waveform types are generated by the digital controller in response to relatively simple commands. It is also possible to link several blocks of different amplitude, mean level, and frequency together to perform simple block loading tests.
Ramp-type loading is also possible using the ramp generator built into the controller.
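The following sketch illustrates simple block loading, with several constant-amplitude sine blocks of differing amplitude, mean level, and frequency joined end to end; in FLAPS such blocks are realized by commands to the digital controller, whereas here the samples are merely computed for illustration.

```python
# Sketch of block loading: constant-amplitude sine blocks joined in sequence.
# Amplitudes, mean levels, and frequencies below are arbitrary examples.
import math

def sine_block(amplitude, mean, frequency_hz, cycles, samples_per_cycle=20):
    n = cycles * samples_per_cycle
    dt = 1.0 / (frequency_hz * samples_per_cycle)
    return [mean + amplitude * math.sin(2 * math.pi * frequency_hz * i * dt)
            for i in range(n)]

history  = sine_block(amplitude=50.0, mean=100.0, frequency_hz=0.2, cycles=3)
history += sine_block(amplitude=80.0, mean=100.0, frequency_hz=0.1, cycles=2)
print(len(history), "command samples in the block-loading sequence")
```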
Service Data Playback -Several industries have developed standardized load sequences that are representative of the load or stress experienced by a typical component. The WASH (Wave Action Standard History) sequence is such an example and is used in the offshore industry. The information on the sequence is stored in the form of turning-point magnitudes (that is, the peak and trough values). In this way, the amount of information to be stored is substantially reduced while maintaining the characteristics of the sequence.
The peak and trough values are sent to the digital controller, and the controller then fits a haversine or straight lines between the peak and trough values. This type of architecture substantially reduces the load on the computer, allowing time for more data acquisition and supervisory tasks.
Alternatively, the actual digitized data points could be stored. However, this information must be pre-processed after recording to remove spikes and other spurious data.
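A minimal sketch of reducing a digitized record to turning-point magnitudes is given below; it assumes the record has already been cleaned of spikes and is intended only to illustrate the data reduction.

```python
# Sketch of reducing a digitized service record to turning points (peaks and
# troughs), the compact form in which sequences such as WASH are stored.
def turning_points(samples):
    """Keep only local peaks and troughs (points where the slope changes sign)."""
    tps = [samples[0]]
    for prev, cur, nxt in zip(samples, samples[1:], samples[2:]):
        if (cur - prev) * (nxt - cur) < 0:   # slope reversal -> peak or trough
            tps.append(cur)
    tps.append(samples[-1])
    return tps

record = [0.0, 1.2, 2.5, 1.8, 0.4, -1.1, -0.2, 0.9, 2.2, 1.0]
print(turning_points(record))   # far fewer values than the raw record
```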
Random Load History from Power Spectral Density -Another, more generalized way of generating realistic sequences that contain not only the characteristic amplitude but also the frequency content is to define a characteristic "power spectral density" (PSD) and then to obtain the time series from this PSD definition.
For each characteristic power spectrum, $\Phi_x(\omega)$, a digital filter, $h_x(t)$, can be found. This filter is the discrete inverse Fourier transform of $H_x(\omega)$, the transfer function corresponding to $\Phi_x(\omega)$. Therefore

$$H_x(\omega) = \left[ \frac{\Phi_x(\omega)}{\Phi_e(\omega)} \right]^{1/2} \qquad (1)$$

where $\Phi_e(\omega)$ is the power spectrum of a white noise (uniform spectrum). The function $h_x(t)$ effectively amplifies all the desired frequencies, so that unwanted frequencies remain only as an insignificant part of the time history. The desired time history is therefore given by

$$\eta_x(t) = \int_0^t h_x(\tau)\, e(t - \tau)\, d\tau \qquad (2)$$
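In discrete form, Eq 2 amounts to shaping white noise with a filter whose gain is the square root of the target spectrum. The sketch below illustrates this with a placeholder Gaussian-shaped target PSD and a standard normal noise source standing in for the PRBS source described next; none of the numerical values are taken from the test system itself.

```python
# Sketch of generating a time history with a prescribed power spectrum by
# filtering white noise (a discrete, FFT-based form of Eq 2).
import numpy as np

n, dt = 4096, 0.1                                   # samples, sampling interval (s)
freq = np.fft.rfftfreq(n, dt)                       # one-sided frequency axis (Hz)
target_psd = np.exp(-((freq - 0.3) / 0.05) ** 2)    # placeholder peak near 0.3 Hz

e = np.random.standard_normal(n)                    # white noise, flat spectrum
E = np.fft.rfft(e)
H = np.sqrt(target_psd)                             # |H| = sqrt(Phi_x / Phi_e)
eta = np.fft.irfft(E * H, n)                        # filtered noise = time history

print(eta[:5])                                      # first samples of the sequence
```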
The white noise sources, $e(t)$, are generated by the pseudorandom binary shift (PRBS) register technique. This technique makes use of a register containing a series of binary digits, 0 and 1. At every clock pulse (signal output time), all of the digits are shifted one place to the right. The last digit is abandoned, and the first is formed from either a two-way or a four-way programmable feedback loop (Fig. 4).
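A minimal sketch of such a shift register with a two-way (exclusive-OR) feedback loop is shown below; the register length and tap positions are illustrative assumptions, not those used in the actual generator.

```python
# Sketch of a pseudorandom binary shift (PRBS) register: at each clock pulse the
# register shifts right, the last digit is dropped, and the new first digit is
# the exclusive-OR of two tapped stages (a two-way feedback loop).
def prbs(length=7, taps=(6, 5), n_outputs=20):
    register = [1] * length                  # any nonzero seed will do
    outputs = []
    for _ in range(n_outputs):
        outputs.append(register[-1])                       # digit about to be abandoned
        new_bit = register[taps[0]] ^ register[taps[1]]    # two-way feedback
        register = [new_bit] + register[:-1]               # shift one place right
    return outputs

print(prbs())   # a repeatable, noise-like sequence of 0s and 1s
```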
The advantage of the PRBS technique is its excellent frequency control, as is demonstrated in Fig. 5, which shows the target and generated spectra.
The frequency content is considered one of the more important factors in corrosion fatigue; therefore, the PRBS method is considered necessary.
The random history within a sea state can be non-Gaussian in some cases. This behavior can be modeled simply by raising the generated time history to a certain power. More sophisticated models using windowing techniques are also available [9]. However, these techniques require more information concerning the load history than just the power spectra. This extra information is still very limited and, therefore, this extension has not been implemented.
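The simple power-law correction can be sketched as follows; the exponent is arbitrary and would in practice be chosen to match measured statistics.

```python
# Sketch of the non-Gaussian correction: raise the generated (Gaussian) history
# to a power while preserving the sign of each value.
def non_gaussian(history, exponent=1.5):
    return [abs(x) ** exponent * (1 if x >= 0 else -1) for x in history]

print(non_gaussian([-2.0, -0.5, 0.0, 0.5, 2.0]))
```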
Data Acquisition
One of the main requirements in data acquisition for fatigue testing applications is the ability to collect data at specified times or events. In the past, this was difficult to achieve and, as a result, a vast amount of data was collected in order not to miss any important event. However, the digital controller provides several ways in which the actual occurrence of an event can be detected and the data collected when the specified event occurs. In this manner, data can be collected intelligently and in manageable amounts.
The following types of data can be collected using FLAPS:
1. position, load, or strain feedback,
2. peak and trough data, and
3. ultimate peak or trough.
In addition, it is possible to commence data acquisition on an increment of feedback or an increment of time or cycles (linear or logarithmic). It is also possible to collect data at the instant of failure of the sample.
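The following sketch illustrates how acquisition points might be scheduled on linear or logarithmic increments of cycle count; the increment values are arbitrary examples rather than FLAPS defaults.

```python
# Sketch of scheduling data acquisition on linear or logarithmic cycle
# increments, so that data are collected in manageable amounts.
def acquisition_cycles(total_cycles, mode="log", step=1000, points_per_decade=10):
    if mode == "linear":
        return list(range(step, total_cycles + 1, step))
    cycles, n = [], 1.0
    factor = 10 ** (1.0 / points_per_decade)
    while n <= total_cycles:
        cycles.append(int(round(n)))
        n *= factor
    return sorted(set(cycles))

print(acquisition_cycles(10_000, mode="log")[:12])   # dense early, sparse later
```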
Conditional Logic
As explained previously, FLAPS attempts to provide the control structures necessary to design a complex sequence of testing. In order to achieve this, several block types have been developed to provide actions that allow the required flexibility.
The most basic of these constructs is the Unconditional Loop feature, which allows the test to loop back to a predetermined stage. In addition, looping can be combined with certain conditions being met. This is made possible by the provision of logical operators such as "less than" or "greater than." These operators can be applied to collected data or to internal counters or timers to provide extremely powerful control features.
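A sketch of a conditional block of this kind is given below; the operator table, field names, and threshold are hypothetical illustrations of the idea rather than the FLAPS block format.

```python
# Sketch of a conditional block: a logical operator applied to collected data
# decides whether an action is taken or the test loops back.
import operator

OPS = {"<": operator.lt, ">": operator.gt, "<=": operator.le, ">=": operator.ge}

def evaluate_condition(block, readings):
    value = readings[block["channel"]]
    return OPS[block["operator"]](value, block["threshold"])

block = {"channel": "crack_depth_mm", "operator": ">", "threshold": 6.0,
         "action": "stop_test", "else": "loop_back"}
readings = {"crack_depth_mm": 4.2}

print(block["action"] if evaluate_condition(block, readings) else block["else"])
```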
Some of the actions that may be performed when a condition is met are:
One of the major problems faced in designing a general-purpose program for engineering applications is the diversity and complexity of the calculations that may need to be carried out.
In the case of FLAPS, it is very difficult to define in advance the type of analysis that needs to be carried out on the collected data. One way of overcoming this is to export the data to other analysis programs. This has been done to some extent, in that there is a facility to export data to commercially available spreadsheets, which provides simple postprocessing capabilities. For complex and intensive calculations, however, the spreadsheet approach is not practical because of the penalties imposed in terms of speed of calculation. Also, implementing in the dialect of the spreadsheet algorithms that may already be coded in a high-level language such as FORTRAN or C may be difficult. Therefore, a feature for linking user-written calculation libraries at run time has been developed.
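The motivation can be illustrated by performing the least squares fit mentioned earlier through a pre-compiled, validated numerical library rather than a spreadsheet dialect; in the sketch below NumPy stands in for such a library (a FORTRAN or C library could equally be linked at run time), and the data values are invented purely for illustration.

```python
# Sketch of handing numerically intensive work to a compiled, validated library:
# a linear least squares fit of crack growth data via NumPy's compiled routines.
import numpy as np

cycles = np.array([1e4, 2e4, 3e4, 4e4, 5e4])
crack_depth = np.array([1.1, 1.9, 3.2, 4.1, 5.0])   # mm, illustrative values only

# Fit crack_depth = a * cycles + b by linear least squares.
A = np.vstack([cycles, np.ones_like(cycles)]).T
(a, b), *_ = np.linalg.lstsq(A, crack_depth, rcond=None)
print(f"growth rate a = {a:.3e} mm/cycle, intercept b = {b:.2f} mm")
```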
Application to Fatigue Testing of Tubular Joints
Tubular welded joints are a commonly used structural component in fixed offshore platforms. These joints are subject to fatigue damage and, as a result, several major research programs have been carried out [1]. Because of the geometric complexity of these components, most of the early work concentrated on producing stress-life (S-N) information. Crack growth data, on the other hand, were obtained by testing small-scale and tubular joint specimens in air and in a corrosive environment (seawater) under constant-amplitude loading. Therefore, it has become necessary to test tubular joints under a realistic load history in order to correlate realistic behavior with that observed from simpler load histories (such as constant-amplitude sine-wave loading).
The development of realistic service stress histories and how these are implemented in FLAPS are described subsequently.
Realistic Service Stress History
The realistic history is formulated from information extracted from extensive in-service load history monitoring projects [10]. The monitored results, however, will be unique to the location, platform dimensions, configuration, payload, foundation behavior, and other factors. Therefore, it is necessary to extract from these lengthy records the most salient features relevant to fatigue. At the same time, as many characteristics of the load history as possible should be incorporated in order to avoid omitting any hidden factors related to fatigue that may not be evident from current knowledge.
The in-service load history was found to behave like a random sequence of short sea states; therefore, the long-term root mean square (zero mean) of stress/strain varies continuously (Fig. 6). The short sea states were found to be generally stationary, and their frequency content can be described by broad-band, double-peak power spectra (Fig. 7). In the majority of cases, the random load history can be approximated as a Gaussian process. The most noticeable exception is the loading on small-diameter secondary members near the mean sea level. The non-Gaussian effect is caused principally by the nonlinear drag response of the structural members.
Wirsching proposed a series of eleven sea states for fatigue reliability analysis of offshore structures [11]. This series makes use of an equation combining the Bretschneider wave spectra with a nominal response peak to describe the sea-state structural response spectra. Based on the same idea, Hartt and Lin [12] developed a six sea-state sequence (Fig. 8) suitable for fatigue testing. The random sea-state sequence depends on the long-term occurrence statistics (fractions of time) and is generated by a Markov chain technique.
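A minimal sketch of a Markov chain sea-state generator is given below; the six states, the transition probabilities, and the restriction to adjacent-state transitions are illustrative assumptions and do not reproduce the Hartt and Lin or WASH statistics.

```python
# Sketch of a Markov chain that produces a random sequence of sea states.
import random

sea_states = [1, 2, 3, 4, 5, 6]          # mildest to most severe
# Hypothetical transition probabilities: from each state, stay put or move to
# an adjacent state.  These numbers are illustrative only.
transition = {
    1: {1: 0.6, 2: 0.4},
    2: {1: 0.3, 2: 0.4, 3: 0.3},
    3: {2: 0.3, 3: 0.4, 4: 0.3},
    4: {3: 0.3, 4: 0.4, 5: 0.3},
    5: {4: 0.4, 5: 0.4, 6: 0.2},
    6: {5: 0.6, 6: 0.4},
}

def sea_state_sequence(n, start=1, seed=0):
    rng, state, seq = random.Random(seed), start, []
    for _ in range(n):
        seq.append(state)
        nxt = list(transition[state])
        state = rng.choices(nxt, weights=[transition[state][s] for s in nxt])[0]
    return seq

print(sea_state_sequence(12))   # the order in which sea states would be applied
```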
An international committee was set up to carry out a detailed study of all the recent North Sea monitoring results with the objective of producing a realistic wave-loading sequence. The proposed standard, known as WASH, was based on the concept developed by Hartt but also takes into account statistical sea-state durations. The resulting sequence contains a series of twelve sea states [13].
In the recommended WASH standard history, the two highest sea states have been combined with the third highest state because of the very small occurrence probability of the former. In addition, in order to compress the lengthy history into a size suitable for laboratory testing, the lowest two states are omitted, and the probability of occurrence of the third lowest state has been reduced to 4.7%. Therefore, the WASH standard load history comprises the "most relevant" 20% of the year-round history. Further development to incorporate the non-Gaussian/nonlinear effect is continuing.
The WASH sequence has been generated using the random load generator described earlier, and the resulting time series has been converted to peak-trough playback data compatible with FLAPS. A sample of the WASH sequence, as seen in the FLAPS Test Preview screen, is shown in Fig. 9.
Crack Growth Data in Air and Seawater
The crack growth in the tubular joints was monitored using the ACPD technique [14]. Fixed probes were attached around the welded intersection and connected to the ACPD instrument through a multiplexer that is controlled by FLAPS. A schematic of the test setup is shown in Fig. 10.
This arrangement allowed the crack shape evolution to be monitored. A sample of the crack shape data obtained is shown in Fig. 11.
FIG. 7 -Typical strain power spectrum (frequency axis in Hz).

Future Developments

The software described here, FLAPS, has proved to be very versatile, although only part of the proposed fatigue test description paradigm is used in the implementation. However, as computers become more powerful and more of the features at present carried out by the host computer are moved to the digital controller, it will be possible to implement the complete description language.
Other developments taking place in artificial intelligence and object-oriented programming technology will also allow the encapsulation of further knowledge of materials testing within the computer. At present, this can be achieved by using dedicated knowledge-based system development tools to create a test sequence for a particular type of test. For example, the user could specify that a low-cycle fatigue test on a new alloy is required; the system could then create a task list with appropriate parameters, using embedded knowledge about FLAPS and the type of test.
The Role of Standards
The increasing use of computers in controlling materials test machines, data acquisition equipment, and other measurement devices makes it important to provide standards so that equipment from different manufacturers may be used and data interchanged freely. At present, because of the device- or computer-specific nature of computer-based systems, it is not always possible to interchange data easily. In addition, some of the algorithms implemented to analyze data depend on the software implementation. This has obvious implications for comparing test data obtained from different sources. In view of this, an ISO Working Group has been set up to draft a suitable standard.
Conclusions
A fatigue test description paradigm was proposed in this paper so that general-purpose software for automated fatigue testing can be developed and used at a high level of abstraction.
This paper has reviewed the development of an integrated general-purpose program for automated fatigue testing that implements the required functional blocks but without the full test description language. Even with this limitation, the program was found to be versatile. One of the main findings in using such a program is that the time required to set up and carry out a complex fatigue test is reduced substantially. In addition, because of the high level of abstraction, the engineer/scientist can concentrate on the application rather than on the details of the controller or device.
The program has been used for corrosion fatigue testing on tubular welded joints, with complex service load simulation and extensive crack growth data acquisition, in order to characterize the crack shape evolution in both air and seawater.
FIG. 10 -Schematic of corrosion fatigue test setup.

References
[2] Topp, D. A. and Dover, W. D., "Review of ACPD/ACFM Crack Measurement Systems," Review of Progress in Quantitative NDE, Vol. 10A, D. O. Thompson and D. E. Chimenti, Eds., Plenum Press, New York, 1991, pp. 301-308.
[3] Stefik, M. and Bobrow, D. G., "Object-Oriented Programming: Themes and Variations," The AI Magazine, Vol. 6, 1984, pp. 40-62.
[4] Minsky, M., "A Framework for Representing Knowledge," The Psychology of Computer Vision, McGraw-Hill, New York, 1975.
[5] Zadeh, L., "Fuzzy Sets and their Applications to Cognitive and Decision Processes," U.S.-Japan Seminar on Fuzzy Sets and their Applications, Berkeley, CA, 1974.
[6] Dharmavasan, S., Broome, D. R., Lugg, M. C., and Dover, W. D., "FLAPS: A Fatigue Laboratory Applications Package," Proceedings, 4th International Conference on Engineering Software, R. A. Adey, Ed., Springer-Verlag, London, 1985.
[7] Instron 8500 Digital Controller Command Set Manual, Instron Corporation, Canton, MA, 1990.
[8] Jones, S., "Graphical Interfaces for Knowledge Engineering: An Overview of Relevant Literature," The Knowledge Engineering Review, Vol. 3, No. 3, Cambridge University Press, Cambridge, UK.
[11] Wirsching, P. H., "Preliminary Dynamic Assessment of Deep-water Platforms," Journal of the Structural Division, American Society of Civil Engineers, Vol. ST7, July 1976, pp. 1447-1462.
[12] Hartt, W. H. and Lin, N. K., A Proposed Stress History for Fatigue Testing Applicable to Offshore Structures, University of Florida, Gainesville, FL, 1985.
[13] Olagnon, M., "Characterisation of Sea States for Fatigue Testing Purposes," Proceedings, Conference on Offshore Mechanics and Arctic Engineering, American Society of Mechanical Engineers, Houston, TX, 1988.
[14] ACFM Crack Micro-Gauge Model U10 User Manual, Technical Software Consultants Ltd., Milton Keynes, UK, 1990.