

The Rock Physics Handbook, Second Edition

Tools for Seismic Analysis of Porous Media

The science of rock physics addresses the relationships between geophysical observations and the underlying physical properties of rocks, such as composition, porosity, and pore fluid content. The Rock Physics Handbook distills a vast quantity of background theory and laboratory results into a series of concise, self-contained chapters, which can be quickly accessed by those seeking practical solutions to problems in geophysical data interpretation.

In addition to the wide range of topics presented in the First Edition (including wave propagation, effective media, elasticity, electrical properties, and pore fluid flow and diffusion), this Second Edition also presents major new chapters on granular material and velocity–porosity–clay models for clastic sediments. Other new and expanded topics include anisotropic seismic signatures, nonlinear elasticity, wave propagation in thin layers, borehole waves, models for fractured media, poroelastic models, attenuation models, and cross-property relations between seismic and electrical parameters. This new edition also provides an enhanced set of appendices with key empirical results, data tables, and an atlas of reservoir rock properties expanded to include carbonates, clays, and gas hydrates.

Supported by a website hosting MATLAB routines for implementing the various rock physics formulas presented in the book, the Second Edition of The Rock Physics Handbook is a vital resource for advanced students and university faculty, as well as in-house geophysicists and engineers working in the petroleum industry. It will also be of interest to practitioners of environmental geophysics, geomechanics, and energy resources engineering interested in quantitative subsurface characterization and modeling of sediment properties.

Gary Mavko received his Ph.D. in Geophysics from Stanford University in 1977, where he is now Professor (Research) of Geophysics. Professor Mavko co-directs the Stanford Rock Physics and Borehole Geophysics Project (SRB), a group of approximately 25 researchers working on problems related to wave propagation in earth materials. Professor Mavko is also a co-author of Quantitative Seismic Interpretation (Cambridge University Press, 2005), and has been an invited instructor for numerous industry courses on rock physics for seismic reservoir characterization. He received the Honorary Membership award from the Society of Exploration Geophysicists (SEG) in 2001, and was the SEG Distinguished Lecturer in 2006.

Tapan Mukerji received his Ph.D. in Geophysics from Stanford University in 1995 and is now an Associate Professor (Research) in Energy Resources Engineering at Stanford University. Professor Mukerji co-directs the Stanford Center for Reservoir Forecasting (SCRF), focusing on problems related to uncertainty and data integration for reservoir modeling. His research interests include wave propagation and statistical rock physics, and he specializes in applied rock physics and geostatistical methods for seismic reservoir characterization, fracture detection, time-lapse monitoring, and shallow subsurface environmental applications. Professor Mukerji is also a co-author of Quantitative Seismic Interpretation, and has taught numerous industry courses. He received the Karcher award from the Society of Exploration Geophysicists in 2000.

Jack Dvorkin received his Ph.D. in Continuum Mechanics in 1980 from Moscow University in the USSR. He has worked in the petroleum industry in the USSR and USA, and is currently a Senior Research Scientist with the Stanford Rock Physics Project at Stanford University. Dr. Dvorkin has been an invited instructor for numerous industry courses throughout the world on rock physics and quantitative seismic interpretation. He is a member of the American Geophysical Union, the Society of Exploration Geophysicists, the American Association of Petroleum Geologists, and the Society of Petroleum Engineers.

The Rock Physics Handbook

Tools for Seismic Analysis of Porous Media

Cambridge University Press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi, Dubai, Tokyo

The Edinburgh Building, Cambridge CB2 8RU, UK

Published in the United States of America by Cambridge University Press, New York

Information on this title: www.cambridge.org/9780521861366

© G. Mavko, T. Mukerji, and J. Dvorkin 2009

First published in print format 2009

ISBN-13 978-0-521-86136-6 Hardback
ISBN-13 978-0-511-65062-8 eBook (NetLibrary)

This publication is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

www.cambridge.org

Contents (excerpt)

2.8 Strain components and equations of motion in cylindrical and …
3.3 NMO in isotropic and anisotropic media 86
3.5 Reflectivity and amplitude variations with offset (AVO) in isotropic media 96
3.11 Waves in layered media: stratigraphic filtering and velocity dispersion 134
3.12 Waves in layered media: frequency-dependent anisotropy, dispersion, …
5.3 Particle size and sorting 242
5.4 Random spherical grain packings: contact models and …
6.4 Brown and Korringa's generalized Gassmann equations for …
6.18 Partial saturation: White and Dutta–Odé model for velocity …
6.19 Velocity dispersion, attenuation, and dynamic permeability in …
7.5 Velocity–porosity–clay models: Han's empirical relations …
7.6 Velocity–porosity–clay models: Tosaya's empirical relations for …
9.5 Cross-property bounds and relations between elastic and …
A.6 Physical properties of common gases 468

Preface to the Second Edition

In the decade since publication of The Rock Physics Handbook, research and use of rock physics has thrived. We hope that the First Edition has played a useful role in this era by making the scattered and eclectic mass of rock physics knowledge more accessible to experts and nonexperts alike.

While preparing this Second Edition, our objective was still to summarize in a convenient form many of the commonly needed theoretical and empirical relations of rock physics. Our approach was to present results, with a few of the key assumptions and limitations, and almost never any derivations. Our intention was to create a quick reference and not a textbook. Hence, we chose to encapsulate a broad range of topics rather than to give in-depth coverage of a few. Even so, there are many topics that we have not addressed. While we have summarized the assumptions and limitations of each result, we hope that the brevity of our discussions does not give the impression that application of any rock physics result to real rocks is free of pitfalls. We assume that the reader will be generally aware of the various topics, and, if not, we provide a few references to the more complete descriptions in books and journals.

The handbook contains 101 sections on basic mathematical tools, elasticity theory, wave propagation, effective media, elasticity and poroelasticity, granular media, and pore-fluid flow and diffusion, plus overviews of dispersion mechanisms, fluid substitution, and VP–VS relations. The book also presents empirical results derived from reservoir rocks, sediments, and granular media, as well as tables of mineral data and an atlas of reservoir rock properties. The emphasis still focuses on elastic and seismic topics, though the discussion of electrical and cross seismic-electrical relations has grown. An associated website (http://srb.stanford.edu/books) offers MATLAB codes for many of the models and results described in the Second Edition.

In this Second Edition, Chapter 2 has been expanded to include new discussions on elastic anisotropy, including the Kelvin notation and eigenvalues for stiffnesses, effective stress behavior of rocks, and stress-induced elastic anisotropy. Chapter 3 includes new material on anisotropic normal moveout (NMO) and reflectivity, amplitude variation with offset (AVO) relations, plus a new section on elastic impedance (including anisotropic forms), and updates on wave propagation in stratified media and borehole waves. Chapter 4 includes updates of inclusion-based effective media models, thinly layered media, and fractured rocks. Chapter 5 contains extensive new sections on granular media, including packing, particle size, sorting, sand–clay mixture models, and elastic effective medium models for granular materials. Chapter 6 expands the discussion of fluid effects on elastic properties, including fluid substitution in laminated media, and models for fluid-related velocity dispersion in heterogeneous poroelastic media. Chapter 7 contains new sections on empirical velocity–porosity–mineralogy relations, VP–VS relations, pore-pressure relations, static and dynamic moduli, and velocity–strength relations. Chapter 8 has new discussions on capillary effects, irreducible water saturation, permeability, and flow in fractures. Chapter 9 includes new relations between electrical and seismic properties. The Appendices have new tables of physical constants and properties for common gases, ice, and methane hydrate.

This Handbook is complementary to a number of other excellent books. For in-depth discussions of specific rock physics topics, we recommend Fundamentals of Rock Mechanics, 4th Edition, by Jaeger, Cook, and Zimmerman; Compressibility of Sandstones, by Zimmerman; Physical Properties of Rocks: Fundamentals and Principles of Petrophysics, by Schön; Acoustics of Porous Media, by Bourbié, Coussy, and Zinszner; Introduction to the Physics of Rocks, by Guéguen and Palciauskas; A Geoscientist's Guide to Petrophysics, by Zinszner and Pellerin; Theory of Linear Poroelasticity, by Wang; Underground Sound, by White; Mechanics of Composite Materials, by Christensen; The Theory of Composites, by Milton; Random Heterogeneous Materials, by Torquato; Rock Physics and Phase Relations, edited by Ahrens; and Offset Dependent Reflectivity – Theory and Practice of AVO Analysis, edited by Castagna and Backus. For excellent collections and discussions of classic rock physics papers we recommend Seismic and Acoustic Velocities in Reservoir Rocks, Volumes 1, 2, and 3, edited by Wang and Nur; Elastic Properties and Equations of State, edited by Shankland and Bass; Seismic Wave Attenuation, by Toksöz and Johnston; and Classics of Elastic Wave Theory, edited by Pelissier et al.

We wish to thank the students, scientific staff, and industrial affiliates of the Stanford Rock Physics and Borehole Geophysics (SRB) project for many valuable comments and insights. While preparing the Second Edition we found discussions with Tiziana Vanorio, Kaushik Bandyopadhyay, Ezequiel Gonzalez, Youngseuk Keehm, Robert Zimmermann, Boris Gurevich, Juan-Mauricio Florez, Anyela Marcote-Rios, Mike Payne, Mike Batzle, Jim Berryman, Pratap Sahay, and Tor Arne Johansen to be extremely helpful. Li Teng contributed to the chapter on anisotropic AVOZ, and Ran Bachrach contributed to the chapter on dielectric properties. Dawn Burgess helped tremendously with editing, graphics, and content. We also wish to thank the readers of the First Edition who helped us to track down and fix errata.

And as always, we are indebted to Amos Nur, whose work, past and present, has helped to make the field of rock physics what it is today.

Gary Mavko, Tapan Mukerji, and Jack Dvorkin


A function E(x) is even if E(x) = E(−x). A function O(x) is odd if O(x) = −O(−x). The Fourier transform has the following properties for even and odd functions:

• Even functions. The Fourier transform of an even function is even. A real even function transforms to a real even function. An imaginary even function transforms to an imaginary even function.
• Odd functions. The Fourier transform of an odd function is odd. A real odd function transforms to an imaginary odd function. An imaginary odd function transforms to a real odd function (i.e., the "realness" flips when the Fourier transform of an odd function is taken):

real even (RE) → real even (RE)
imaginary even (IE) → imaginary even (IE)
real odd (RO) → imaginary odd (IO)
imaginary odd (IO) → real odd (RO)

Any function can be expressed in terms of its even and odd parts, f(x) = E(x) + O(x), where

E(x) = ½[f(x) + f(−x)]
O(x) = ½[f(x) − f(−x)]


Then, for an arbitrary complex function we can summarize these relations as (Bracewell, 1965)

f(x) = re(x) + i ie(x) + ro(x) + i io(x)
F(s) = RE(s) + i IE(s) + RO(s) + i IO(s)

As a consequence, a real function f(x) has a Fourier transform that is hermitian, F(s) = F*(−s), where * refers to the complex conjugate.
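These symmetry rules are easy to check numerically. The sketch below (not from the book) uses a naive discrete Fourier transform in Python; the particular 8-point sequences are arbitrary illustrations:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(N^2)); enough to illustrate symmetry."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 8
# real even sequence: x[n] == x[(N - n) % N]
even = [1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0, 2.0]
# real odd sequence: x[n] == -x[(N - n) % N]
odd = [0.0, 1.0, 2.0, 3.0, 0.0, -3.0, -2.0, -1.0]

E = dft(even)
O = dft(odd)

# RE -> RE: the transform of a real even sequence is real (and even)
print(all(abs(v.imag) < 1e-9 for v in E))   # True
# RO -> IO: the transform of a real odd sequence is purely imaginary
print(all(abs(v.real) < 1e-9 for v in O))   # True
```

The same check with the roles of real and imaginary parts swapped verifies the IE → IE and IO → RO rows.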

For a more general complex function, f(x), we can tabulate some additional properties (Bracewell, 1965). In particular, f*(x) ★ f(x) is called the autocorrelation of f(x).


Phase spectrum

The Fourier transform F(s) is most generally a complex function, which can be written as

F(s) = |F|e^{iφ} = Re F(s) + i Im F(s)

where |F| is the modulus and φ is the phase, given by

φ = tan⁻¹[Im F(s)/Re F(s)]

The function φ(s) is sometimes also called the phase spectrum.

Obviously, both the modulus and phase must be known to completely specify the Fourier transform F(s) or its transform pair in the other domain, f(x). Consequently, an infinite number of functions f(x), F(s) are consistent with a given spectrum |F(s)|².

The zero-phase equivalent function (or zero-phase equivalent wavelet) corresponding to a given spectrum is

F(s) = |F(s)|

f(x) = ∫_{−∞}^{+∞} |F(s)| e^{+i2πxs} ds

which implies that F(s) is real and f(x) is hermitian. In the case of zero-phase real wavelets, then, both F(s) and f(x) are real even functions.

The minimum-phase equivalent function or wavelet corresponding to a spectrum is the unique one that is both causal and invertible. A simple way to compute the minimum-phase equivalent of a spectrum |F(s)|² is to perform the following steps (Claerbout, 1992):

(1) Take the logarithm, B(s) = ln|F(s)|.
(2) Take the Fourier transform, B(s) ⇒ b(x).
(3) Multiply b(x) by zero for x < 0 and by 2 for x > 0. If done numerically, leave the values of b at zero and the Nyquist frequency unchanged.
(4) Transform back, giving B(s) + iφ(s), where φ is the desired phase spectrum.
(5) Take the complex exponential to yield the minimum-phase function: Fmp(s) = exp[B(s) + iφ(s)] = |F(s)|e^{iφ(s)}.
(6) The causal minimum-phase wavelet is the Fourier transform of Fmp(s) ⇒ fmp(x).

Another way of saying this is that the phase spectrum of the minimum-phase equivalent function is the Hilbert transform (see Section 1.2 on the Hilbert transform) of the log of the energy spectrum.
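The steps above can be sketched with a naive DFT in Python; the test wavelet and the 8-sample length are arbitrary illustration choices, not values from the text:

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# a non-minimum-phase test wavelet, zero-padded (spectrum nonzero everywhere)
f = [1.0, 2.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0]
N = len(f)
F = dft(f)

B = [math.log(abs(v)) for v in F]       # (1) log of the amplitude spectrum
b = idft(B)                             # (2) to the "time" domain (real cepstrum)
# (3) zero the negative times, double the positive, keep n = 0 and Nyquist
bmp = [b[0]] + [2 * b[n] for n in range(1, N // 2)] + [b[N // 2]] + [0j] * (N // 2 - 1)
Fmp = [cmath.exp(v) for v in dft(bmp)]  # (4)-(5) back to frequency, exponentiate
fmp = idft(Fmp)                         # (6) the causal minimum-phase wavelet

# the amplitude spectrum is preserved: |Fmp(s)| == |F(s)|
print(all(abs(abs(Fmp[k]) - abs(F[k])) < 1e-6 for k in range(N)))   # True
```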

Sampling theorem

A function f(x) is said to be band limited if its Fourier transform is nonzero only within a finite range of frequencies, |s| < sc, where sc is sometimes called the cut-off frequency. The function f(x) is fully specified if sampled at equal spacing not exceeding Δx = 1/(2sc). Equivalently, a time series sampled at interval Δt adequately describes the frequency components out to the Nyquist frequency fN = 1/(2Δt).


The numerical process to recover the intermediate points between samples is to convolve with the sinc function:

2sc sinc(2scx) = 2sc sin(2πscx)/(2πscx)

where

sinc(x) ≡ sin(πx)/(πx)

which has the properties:

sinc(0) = 1
sinc(n) = 0, n = nonzero integer

The Fourier transform of sinc(x) is the boxcar function Π(s), equal to 1 for |s| < 1/2 and 0 for |s| > 1/2.
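As a sketch of this reconstruction, the Python fragment below resamples a band-limited sinusoid between its samples by Whittaker–Shannon (sinc) interpolation. The sample rate, test frequency, and truncation length are illustrative assumptions, and truncating the infinite sum limits the accuracy:

```python
import math

def sinc(x):
    """sinc(x) = sin(pi x) / (pi x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

dt = 0.01                        # sample interval -> Nyquist frequency 50 Hz
f0 = 12.0                        # band-limited test tone, well below Nyquist
ks = range(-500, 501)
samples = {k: math.sin(2 * math.pi * f0 * k * dt) for k in ks}

def reconstruct(t):
    """Whittaker-Shannon interpolation: samples weighted by shifted sincs."""
    return sum(samples[k] * sinc((t - k * dt) / dt) for k in ks)

t = 0.1234                       # an off-grid time
exact = math.sin(2 * math.pi * f0 * t)
approx = reconstruct(t)
print(abs(approx - exact) < 1e-2)   # True (truncating the sum limits accuracy)
```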

Numerical details

Consider a band-limited function g(t) sampled at N points at equal intervals: g(0), g(Δt), g(2Δt), ..., g((N − 1)Δt). A typical fast Fourier transform (FFT) routine will yield N equally spaced values of the Fourier transform, G(f), often arranged with the positive frequencies first, followed by the negative frequencies:

time domain sample rate: Δt
Nyquist frequency: fN = 1/(2Δt)
frequency domain sample rate: Δf = 1/(NΔt)
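This bookkeeping can be checked numerically, assuming the typical FFT output layout; the values of N and Δt below are arbitrary illustrative choices:

```python
# Frequency bookkeeping for an FFT of N samples at interval dt.
N, dt = 8, 0.001                 # 8 samples at 1 ms
df = 1.0 / (N * dt)              # frequency-domain sample rate
f_nyq = 1.0 / (2 * dt)           # Nyquist frequency

# bin k corresponds to +k*df up to the Nyquist bin (k = N/2); above that,
# "wraparound" maps bin k to the negative frequency (k - N)*df
freqs = [k * df if k <= N // 2 else (k - N) * df for k in range(N)]

print(df)             # 125.0
print(f_nyq)          # 500.0
print(freqs[N // 2])  # 500.0   (the sample shared by +fN and -fN)
print(freqs[-1])      # -125.0
```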

Note that, because of "wraparound," the sample at (N/2 + 1) represents both ±fN.

Spectral estimation and windowing

It is often desirable in rock physics and seismic analysis to estimate the spectrum of a wavelet or seismic trace. The most common, easiest, and, in some ways, the worst way is simply to chop out a piece of the data, take the Fourier transform, and find its magnitude. The problem is related to sample length. If the true data function is f(t), a small sample of the data can be thought of as the product of the data with a boxcar window. More generally, we can "window" the sample with some other function ω(t):

f̂(t) = f(t) ω(t)

yielding, in the frequency domain, a convolution:

F̂(s) = F(s) ∗ W(s)

Thus, the estimated spectrum can be highly contaminated by the Fourier transform of the window, often with the effect of smoothing and distorting the spectrum due to the convolution with the window spectrum W(s). This can be particularly severe in the analysis of ultrasonic waveforms in the laboratory, where often only the first 1 to 1½ cycles are included in the window. The solution to the problem is not easy, and there is an extensive literature (e.g., Jenkins and Watts, 1968; Marple, 1987) on spectral estimation. Our advice is to be aware of the artifacts of windowing and to experiment to determine the sensitivity of the results, such as the spectral ratio or the phase velocity, to the choice of window size and shape.
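The effect of the window can be illustrated numerically. In this hypothetical sketch, a sinusoid that falls between DFT bins is transformed once with a boxcar window and once with a Hann taper (one common choice; the text does not prescribe a specific window):

```python
import cmath, math

def dft_mag(x):
    """Magnitude spectrum via a naive DFT (O(N^2)); fine for a demo."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N)))
            for k in range(N)]

N = 64
# a sinusoid at 5.5 cycles per record falls between DFT bins:
# the worst case for leakage from a rectangular (boxcar) window
x = [math.sin(2 * math.pi * 5.5 * n / N) for n in range(N)]
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]

rect_spec = dft_mag(x)                                     # boxcar window
hann_spec = dft_mag([xi * wi for xi, wi in zip(x, hann)])  # tapered window

# total leakage well away from the spectral peaks (near bins 5-6 and 58-59)
leak_rect = sum(rect_spec[k] for k in range(12, 53))
leak_hann = sum(hann_spec[k] for k in range(12, 53))
print(leak_hann < leak_rect)   # True: tapering sharply reduces leakage
```

The taper trades a broader main lobe for far lower sidelobes, which is exactly the smoothing-versus-contamination trade-off discussed above.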

Fourier transform theorems

Tables 1.1.1and1.1.2summarize some useful theorems (Bracewell,1965) Iff(x) hasthe Fourier transformF(s), and g(x) has the Fourier transform G(s), then the Fourier

5 1.1 The Fourier transform

Trang 20

transform pairs in the x-domain and the s-domain are as shown in the tables.

Table 1.1.3lists some useful Fourier transform pairs

F_Hi(x) = −(1/πx) ∗ f(x)

The Fourier transform of (−1/πx) is (i sgn(s)), that is, +i for positive s and −i for negative s. Hence, applying the Hilbert transform keeps the Fourier amplitudes or spectrum the same but changes the phase. Under the Hilbert transform, sin(kx) is converted to cos(kx), and cos(kx) is converted to −sin(kx). Similarly, the Hilbert transforms of even functions are odd functions and vice versa.

Table 1.1.1 Fourier transform theorems

Similarity: f(ax) ⇔ (1/|a|) F(s/a)
Power (Rayleigh): ∫_{−∞}^{+∞} f(x) g*(x) dx = ∫_{−∞}^{+∞} F(s) G*(s) ds


The inverse of the Hilbert transform is itself the Hilbert transform with a change of sign. As discussed below, the Fourier transform of the analytic signal S(t) is zero for negative frequencies.

Table 1.1.3 Some Fourier transform pairs


The instantaneous envelope of the analytic signal is E(t) = |S(t)|. Claerbout (1992) has suggested that the instantaneous frequency ω can be numerically more stable if the denominator is rationalized and the functions are locally smoothed, where ⟨·⟩ indicates some form of running average or smoothing.


Similarly, if we reverse the domains, an analytic signal of the form

S(t) = f(t) − iF_Hi(t)

must have a Fourier transform that is zero for negative frequencies. In fact, one convenient way to implement the Hilbert transform of a real function is by performing the following steps:

(1) Take the Fourier transform.
(2) Multiply the Fourier transform by zero for f < 0.
(3) Multiply the Fourier transform by 2 for f > 0.
(4) If done numerically, leave the samples at f = 0 and the Nyquist frequency unchanged.
(5) Take the inverse Fourier transform.

The imaginary part of the result will be the negative Hilbert transform of the real part.
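The five steps can be sketched directly in Python with a naive DFT; the cosine test signal and the 32-point length are illustrative assumptions:

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

N = 32
x = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]

# steps (1)-(5): build the analytic signal in the frequency domain
X = dft(x)
for k in range(1, N // 2):
    X[k] *= 2          # double the positive frequencies
for k in range(N // 2 + 1, N):
    X[k] = 0           # zero the negatives (f = 0 and Nyquist left unchanged)
analytic = idft(X)

# the imaginary part is the *negative* Hilbert transform of the real part,
# so the Hilbert transform of cos should come out as -sin
hilbert = [-v.imag for v in analytic]
expected = [-math.sin(2 * math.pi * 3 * n / N) for n in range(N)]
print(max(abs(h - e) for h, e in zip(hilbert, expected)) < 1e-9)   # True
```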

1.3 Statistics and probability

The sample variance of n values xi with mean m is

σ² = (1/n) Σᵢ₌₁ⁿ (xᵢ − m)²

(An unbiased estimate of the population variance is often found by dividing the sum given above by (n − 1) instead of by n.)

The standard deviation, σ, is the square root of the variance, while the coefficient of variation is σ/m. The mean deviation, α, is

α = (1/n) Σᵢ₌₁ⁿ |xᵢ − m|

When trying to determine whether two different data variables, x and y, are related, we often estimate the correlation coefficient, r, given by (e.g., Young, 1962)

r = Cxy/(σx σy)

where σx and σy are the standard deviations of the two distributions and mx and my are their means. The correlation coefficient gives a measure of how close the points come to falling along a straight line in a scatter plot of x versus y; |r| = 1 if the points lie perfectly along a line, and |r| < 1 if there is scatter about the line. The numerator of this expression is the sample covariance, Cxy, which is defined as

Cxy = (1/n) Σᵢ₌₁ⁿ (xᵢ − mx)(yᵢ − my)

It is important to remember that the correlation coefficient is a measure of the linear relation between x and y. If they are related in a nonlinear way, the correlation coefficient will be misleadingly small.
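A minimal sketch of these definitions (sample statistics with 1/n normalization, as above): a perfectly linear relation gives |r| = 1, while a perfectly deterministic but nonlinear (parabolic) relation gives r ≈ 0:

```python
import math

def corrcoef(x, y):
    """Sample correlation coefficient r = Cxy / (sigma_x sigma_y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x) / n)
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y) / n)
    return cxy / (sx * sy)

xs = [i / 10 for i in range(-20, 21)]
linear = [2 * xi + 1 for xi in xs]   # perfect linear relation
parab = [xi ** 2 for xi in xs]       # perfect *nonlinear* relation

print(abs(corrcoef(xs, linear) - 1.0) < 1e-12)  # True: |r| = 1 on a line
print(abs(corrcoef(xs, parab)) < 1e-12)         # True: nonlinearity hides the link
```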

The simplest recipe for estimating the linear relation between two variables, x and y, is linear regression, in which we assume a relation of the form

y = ax + b

with the coefficients chosen to minimize the sum of the squared residuals, Σᵢ (yᵢ − ŷᵢ)². The square of the correlation coefficient r is the coefficient of determination, often denoted by r², which is a measure of the regression variance relative to the total variance in the variable y, expressed as


r² = 1 − (variance of y around the linear regression)/(total variance of y)

Interchanging the roles of x and y gives a regression of the form

x = a′y + b′

Generally a ≠ 1/a′ unless the data are perfectly correlated. In fact, the correlation coefficient, r, can be written as r = √(aa′).

The coefficients of a linear regression among three or more variables can be found similarly by least squares, where the k sets of independent variables form columns 2 to (n + 1) in the matrix M.


Variogram and covariance function

In geostatistics, variables are modeled as random fields, X(u), where u is the spatial position vector. Spatial correlation between two random fields X(u) and Y(u) is described by the cross-covariance function CXY(h), defined by

CXY(h) = E{[X(u) − mX(u)][Y(u + h) − mY(u + h)]}

where E{} denotes the expectation operator, mX and mY are the means of X and Y, and h is called the lag vector. For stationary fields, mX and mY are independent of position. When X and Y are the same function, the equation represents the auto-covariance function CXX(h). A closely related measure of two-point spatial variability is the semivariogram, γ(h). For stationary random fields X(u) and Y(u), the cross-variogram 2γXY(h) is defined as

2γXY(h) = E{[X(u + h) − X(u)][Y(u + h) − Y(u)]}

When X and Y are the same, the equation represents the variogram of X(h). For a stationary random field, the variogram and covariance function are related by

γ(h) = C(0) − C(h)

The binomial distribution describes the probability of n successes in N independent trials, each with success probability p:

f_{N,p}(n) = (N choose n) pⁿ (1 − p)^{N−n}

The mean of the binomial distribution is given by

m_b = Np

and the variance of the binomial distribution is given by

σ_b² = Np(1 − p)


The Poisson distribution is the limit of the binomial distribution as N → ∞ and p → 0 so that λ = Np remains finite. The Poisson distribution is given by

f_λ(n) = λⁿ e^{−λ}/n!

The Poisson distribution is a discrete probability distribution and expresses the probability of n events occurring during a given interval of time if the events have an average (positive real) rate λ and are independent of the time since the previous event; n is a non-negative integer.

The mean and the variance of the Poisson distribution are both equal to λ.

For a uniform (boxcar) distribution over an interval [a, b], the standard deviation is

σ = |b − a|/√12
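The binomial-to-Poisson limit can be checked numerically; the value λ = 3 and the truncation at n < 10 are arbitrary illustrative choices:

```python
import math

def binom_pmf(n, N, p):
    """Binomial probability of n successes in N trials."""
    return math.comb(N, n) * p ** n * (1 - p) ** (N - n)

def poisson_pmf(n, lam):
    """Poisson probability of n events with mean rate lam."""
    return lam ** n * math.exp(-lam) / math.factorial(n)

lam = 3.0
N = 100000                 # large N ...
p = lam / N                # ... small p, with Np = lambda held fixed
max_diff = max(abs(binom_pmf(n, N, p) - poisson_pmf(n, lam)) for n in range(10))
print(max_diff < 1e-3)     # True: the binomial pmf converges to the Poisson pmf
```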

The Gaussian or normal distribution is given by

f(x) = (1/(σ√(2π))) e^{−(x−m)²/(2σ²)}

where σ is the standard deviation and m is the mean. The mean deviation for the Gaussian distribution is

α = σ√(2/π)

When m measurements are made of n quantities, the situation is described by the n-dimensional multivariate Gaussian probability density function (pdf):


where xᵀ = (x₁, x₂, ..., xₙ) is the vector of observations, mᵀ = (m₁, m₂, ..., mₙ) is the vector of means of the individual distributions, and C is the covariance matrix.

The probability of n flooding surfaces in an interval of thickness D is

P_{λD}(n) = (λD)ⁿ exp(−λD)/n!

The mean number of occurrences is λD, where λ is the mean number of occurrences per unit length. The mean thickness between events is D/(λD) = 1/λ. The interval thicknesses d between flooding events are governed by the truncated exponential distribution.


The logistic distribution is a continuous distribution with probability density function given by

P(x) = k e^{−k(x−λ)} / [1 + e^{−k(x−λ)}]²

where λ is the location parameter and k is the steepness (inverse scale) parameter.

Monte Carlo simulations

Statistical simulation is a powerful numerical method for tackling many probabilistic problems. One of the steps is to draw samples Xi from a desired probability distribution function F(x). This procedure is often called Monte Carlo simulation, a term made popular by physicists working on the bomb during the Second World War.

In general, Monte Carlo simulation can be a very difficult problem, especially when X is multivariate with correlated components and F(x) is a complicated function. For the simple case of a univariate X and a completely known F(x) (either analytically or numerically), drawing Xi amounts to first drawing uniform random variates Ui between 0 and 1, and then evaluating the inverse of the desired cumulative distribution function (CDF) at these Ui: Xi = F⁻¹(Ui). The inverse of the CDF is called the quantile function. When F⁻¹(X) is not known analytically, the inversion can easily be done by table-lookup and interpolation from the numerically evaluated or nonparametric CDF derived from data. A graphical description of univariate Monte Carlo simulation is shown in Figure 1.3.1.

Many modern computer packages have random number generators not only for uniform and normal (Gaussian) distributions, but also for a large number of well-known, analytically defined statistical distributions.

Often Monte Carlo simulations require simulating correlated random variables (e.g., VP, VS). Correlated random variables may be simulated sequentially, making use of the chain rule of probability, which expresses the joint probability density in terms of the conditional and marginal densities: P(VP, VS) = P(VS | VP)P(VP).

A simple procedure for correlated Monte Carlo draws is as follows:

• draw a VP sample from the VP distribution;
• compute a VS from the drawn VP and the VP–VS regression;
• add to the computed VS a random Gaussian error with zero mean and variance equal to the variance of the residuals from the VP–VS regression.

This gives a random, correlated (VP, VS) sample. A better approach is to draw VS from the conditional distribution of VS for each given VP value, instead of using a simple VP–VS regression. Given sufficient VP–VS training data, the conditional distributions of VS for different VP can be computed.
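The three-step recipe can be sketched in Python. Everything numeric here is a hypothetical illustration: the Vp distribution N(3.5, 0.3) km/s, the regression Vs = 0.8·Vp − 1.17 (loosely Castagna-like), and the residual standard deviation 0.05 km/s are invented, not values from the text:

```python
import random

random.seed(0)
# hypothetical regression Vs = a*Vp + b with residual standard deviation resid_sd
a, b, resid_sd = 0.8, -1.17, 0.05

draws = []
for _ in range(20000):
    vp = random.gauss(3.5, 0.3)                    # draw Vp from its distribution
    vs = a * vp + b + random.gauss(0.0, resid_sd)  # regression + Gaussian residual
    draws.append((vp, vs))

# on average the simulated pairs honor the regression: E[Vs] = a*E[Vp] + b
mean_vs = sum(vs for _, vs in draws) / len(draws)
print(abs(mean_vs - (a * 3.5 + b)) < 0.01)   # True: close to 0.8*3.5 - 1.17 = 1.63
```

Scattering the residual around the regression line is what keeps the (Vp, Vs) cloud realistically correlated rather than collapsing it onto the trend.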

Bootstrap

"Bootstrap" is a very powerful computational statistical method for assigning measures of accuracy to statistical estimates (e.g., Efron and Tibshirani, 1993). The general idea is to make multiple replicates of the data by drawing from the original data with replacement. Each of the bootstrap data replicates has the same number of samples as the original data set, but since they are drawn with replacement, some of the data may be represented more than once in the replicate data sets, while others might be missing. Drawing with replacement from the data is equivalent to Monte Carlo realizations from the empirical CDF. The statistic of interest is computed on all of the replicate bootstrap data sets. The distribution of the bootstrap replicates of the statistic is a measure of uncertainty of the statistic.

Drawing bootstrap replicates from the empirical CDF in this way is sometimes termed nonparametric bootstrap. In parametric bootstrap, the data are first modeled by a parametric CDF (e.g., a multivariate Gaussian), and then bootstrap data replicates are drawn from the modeled CDF. Both simple bootstrap techniques described above assume the data are independent and identically distributed. More sophisticated bootstrap techniques exist that can account for data dependence.

Statistical classification

The goal in statistical classification problems is to predict the class of an unknown sample based on observed attributes or features of the sample. For example, the observed attributes could be P and S impedances, and the classes could be lithofacies, such as sand and shale. The classes are sometimes also called states, outcomes, or responses, while the observed features are called the predictors. Discussions concerning many modern classification methods may be found in Fukunaga (1990), Duda et al. (2000), Hastie et al. (2001), and Bishop (2006).

There are two general types of statistical classification: supervised classification, which uses a training data set of samples for which both the attributes and classes have been observed; and unsupervised learning, for which only the observed attributes are included in the data. Supervised classification uses the training data to devise a classification rule, which is then used to predict the classes for new data, where the attributes are observed but the outcomes are unknown. Unsupervised learning tries to cluster the data into groups that are statistically different from each other based on the observed attributes.

A fundamental approach to the supervised classification problem is provided by Bayesian decision theory. Let x denote the univariate or multivariate input attributes, and let cj, j = 1, ..., N denote the N different states or classes. The Bayes formula expresses the probability of a particular class given an observed x as

P(cj | x) = P(x, cj)/P(x) = P(x | cj) P(cj)/P(x)

where P(x, cj) denotes the joint probability of x and cj; P(x | cj) denotes the conditional probability of x given cj; and P(cj) is the prior probability of a particular class. Finally, P(x) is the marginal or unconditional pdf of the attribute values across all N states and serves as a normalization constant. The class-conditional pdf, P(x | cj), is estimated from the training data or from a combination of training data and forward models. The Bayes classification rule says:

classify as class ck if P(ck | x) > P(cj | x) for all j ≠ k.

This is equivalent to choosing ck when P(x | ck)P(ck) > P(x | cj)P(cj) for all j ≠ k. The Bayes classification rule is the optimal one that minimizes the misclassification error and maximizes the posterior probability. Bayes classification requires estimating the complete set of class-conditional pdfs P(x | cj). With a large number of attributes, getting a good estimate of the highly multivariate pdf becomes difficult. Classification based on traditional discriminant analysis uses only the means and covariances of the training data, which are easier to estimate than the complete pdfs. When the input features follow a multivariate Gaussian distribution, discriminant classification is equivalent to Bayes classification, but with other data distribution patterns, the discriminant classification is not guaranteed to maximize the posterior probability. Discriminant analysis classifies new samples according to the minimum Mahalanobis distance to each class cluster in the training data. The Mahalanobis distance from a sample x to the cluster for class j is defined as

d² = (x − mⱼ)ᵀ Cⱼ⁻¹ (x − mⱼ)

where mⱼ is the mean and Cⱼ is the covariance matrix of the feature vector for class j. When the covariance matrices for all the classes are taken to be identical, the classification gives rise to linear discriminant surfaces in the feature space. More generally, with different covariance matrices for each category, the discriminant surfaces are quadratic. If the classes have unequal prior probabilities, the term ln[P(classⱼ)] is added to the right-hand side of the equation for the Mahalanobis distance, where P(classⱼ) is the prior probability for the jth class. Linear and quadratic discriminant classifiers are simple, robust classifiers and often produce good results, performing among the top few classifier algorithms.
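The rule "choose ck when P(x | ck)P(ck) > P(x | cj)P(cj)" can be sketched with two hypothetical one-dimensional Gaussian lithofacies classes; the means, standard deviations, and priors below are invented illustration values, not data from the text:

```python
import math

# two hypothetical lithofacies described by 1-D Gaussian attribute pdfs
classes = {
    "sand":  {"mean": 6.0, "sd": 0.5, "prior": 0.4},
    "shale": {"mean": 7.0, "sd": 0.5, "prior": 0.6},
}

def gauss_pdf(x, m, s):
    """Gaussian class-conditional pdf P(x | c)."""
    return math.exp(-(x - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def classify(x):
    """Bayes rule: pick the class maximizing P(x | c) P(c)."""
    return max(classes,
               key=lambda c: gauss_pdf(x, classes[c]["mean"], classes[c]["sd"])
                             * classes[c]["prior"])

print(classify(5.8))   # sand
print(classify(7.1))   # shale
```

With more attributes, gauss_pdf would be replaced by a multivariate density, and with equal covariances the decision boundary becomes the linear discriminant surface described above.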

1.4 Coordinate transformations

Synopsis

It is often necessary to transform vector and tensor quantities in one coordinate system to another more suited to a particular problem. Consider two right-handed rectangular Cartesian coordinate systems (x, y, z) and (x′, y′, z′) with the same origin, but with their axes rotated arbitrarily with respect to each other. The relative orientation of the two sets of axes is given by the direction cosines βij, where each element is defined as the cosine of the angle between the new i′-axis and the original j-axis. For example, β23 is the cosine of the angle between the 2-axis of the primed coordinate system and the 3-axis of the unprimed coordinate system.

The general transformation law for tensors is

M′ABCD... = βAa βBb βCc βDd ... Mabcd...

where summation over repeated indices is implied. The left-hand subscripts (A, B, C, D, ...) on the βs match the subscripts of the transformed tensor M′ on the left, and the right-hand subscripts (a, b, c, d, ...) match the subscripts of M on the right. Thus vectors, which are first-rank tensors, transform as

u′A = βAa ua

or, written out in matrix form,

[u′1]   [β11 β12 β13] [u1]
[u′2] = [β21 β22 β23] [u2]
[u′3]   [β31 β32 β33] [u3]
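As an illustrative numerical check (not from the text), the first-rank transformation law can be applied with NumPy for a rotation of the axes about the z-axis; the helper name and angle are arbitrary:

```python
import numpy as np

def direction_cosines_z(theta):
    """Direction-cosine matrix beta for a right-handed rotation of the AXES by
    angle theta (radians) about the z-axis: beta[i, j] is the cosine of the
    angle between the new i'-axis and the original j-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0.0],
                     [-s, c, 0.0],
                     [0.0, 0.0, 1.0]])

beta = direction_cosines_z(np.deg2rad(30.0))

# First-rank tensor (vector): u'_A = beta_Aa u_a
u = np.array([1.0, 0.0, 0.0])
u_prime = beta @ u

# Second-rank tensor (e.g., a stress state): M'_AB = beta_Aa beta_Bb M_ab
M = np.diag([3.0, 1.0, 1.0])
M_prime = beta @ M @ beta.T
```

Rotations leave tensor invariants unchanged, so the length of u and the trace of M are preserved, which is a convenient sanity check on the β matrix.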

whereas second-rank tensors, such as stresses and strains, obey

M′AB = βAa βBb Mab

The 6 × 6 stiffness and compliance matrices in 2-index Voigt notation transform according to the Bond transformation matrices M and N, as explained below (Auld, 1990):

[C′] = [M][C][M]ᵀ

[S′] = [N][S][N]ᵀ


Trang 34

The elements of the 6 × 6 Bond transformation matrices M and N are combinations of the direction cosines βij (the full tables are given by Auld, 1990).

The advantage of the Bond method for transforming stiffnesses and compliances is that it can be applied directly to the elastic constants given in 2-index notation, as they almost always are in handbooks and tables.
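A sketch of the Bond approach, assuming NumPy (the helper names and the compact formula used to build M are illustrative, not the book's tabulated form): the 6 × 6 stress-transformation matrix is constructed from the direction cosines and checked against the ordinary four-index transformation law.

```python
import numpy as np

# Voigt index pairs (zero-based): I = 1..6 <-> (11, 22, 33, 23, 13, 12)
VOIGT = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]

def bond_matrix(beta):
    """6x6 Bond matrix M for transforming stiffnesses: [C'] = M C M^T.
    Compact equivalent of the tabulated elements:
    M_IJ = beta_ik beta_jl + (1 - delta_kl) beta_il beta_jk."""
    M = np.zeros((6, 6))
    for I, (i, j) in enumerate(VOIGT):
        for J, (k, l) in enumerate(VOIGT):
            M[I, J] = beta[i, k] * beta[j, l]
            if k != l:
                M[I, J] += beta[i, l] * beta[j, k]
    return M

def voigt_to_tensor(C):
    """Expand a symmetric 6x6 stiffness matrix to the full c_ijkl (c_IJ = c_ijkl)."""
    c = np.zeros((3, 3, 3, 3))
    for I, (i, j) in enumerate(VOIGT):
        for J, (k, l) in enumerate(VOIGT):
            for a, b in {(i, j), (j, i)}:
                for m, n in {(k, l), (l, k)}:
                    c[a, b, m, n] = C[I, J]
    return c

def tensor_to_voigt(c):
    return np.array([[c[i, j, k, l] for (k, l) in VOIGT] for (i, j) in VOIGT])

# A generic symmetric "stiffness" and an arbitrary rotation about the y-axis
rng = np.random.default_rng(0)
C = rng.normal(size=(6, 6)); C = C + C.T
t = np.deg2rad(25.0)
beta = np.array([[np.cos(t), 0.0, -np.sin(t)],
                 [0.0, 1.0, 0.0],
                 [np.sin(t), 0.0, np.cos(t)]])

M = bond_matrix(beta)
C_bond = M @ C @ M.T  # Bond route, directly in 2-index notation

# Four-index route: c'_ijkl = beta_ia beta_jb beta_kc beta_ld c_abcd
c4_rot = np.einsum("ia,jb,kc,ld,abcd->ijkl", beta, beta, beta, beta, voigt_to_tensor(C))
C_four = tensor_to_voigt(c4_rot)
```

The two routes agree to machine precision, which is exactly the advantage the text describes: the Bond matrix lets one stay in the 2-index notation throughout.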

Assumptions and limitations

Coordinate transformations presuppose right-handed rectangular coordinate systems.


2 Elasticity and Hooke’s law

Synopsis

In an isotropic, linear elastic material, the stress and strain are related by Hooke's law as follows (e.g., Timoshenko and Goodier, 1934):

σij = λδij eaa + 2μ eij

eij = (1/E)[(1 + ν)σij − ν δij σaa]

where

eij = elements of the strain tensor

σij = elements of the stress tensor

eaa = volumetric strain (sum over repeated index)

σaa = mean stress times 3 (sum over repeated index)

δij = 0 if i ≠ j and δij = 1 if i = j

In an isotropic, linear elastic medium, only two constants are needed to specify the stress–strain relation completely (for example, [λ, μ] in the first equation, or [E, ν], which can be derived from [λ, μ], in the second equation). Other useful and convenient moduli can be defined, but they are always relatable to just two constants. The three moduli that follow are examples.

The bulk modulus, K, is defined as the ratio of the hydrostatic stress, σ0, to the volumetric strain:

K = σ0/eaa = λ + (2/3)μ


Occasionally in the literature, authors have used the term incompressibility as an alternate name for Lamé's constant, λ, even though λ is not the reciprocal of the compressibility.

The shear modulus, μ, is defined as the ratio of the shear stress to the shear strain:

μ = σij/(2eij), i ≠ j

The P-wave modulus, M = ρVP², is defined as the ratio of the axial stress to the axial strain in a uniaxial strain state:

M = σ33/e33 = λ + 2μ, when e11 = e22 = 0

Although any one of the isotropic constants (λ, μ, K, M, E, and ν) can be derived in terms of the others, μ and K have a special significance as eigenelastic constants (Mehrabadi and Cowin, 1989) or principal elasticities of the material (Kelvin, 1856). The stress and strain eigentensors associated with μ and K are orthogonal, as discussed in Section 2.2. Such an orthogonal significance does not hold for the pair λ and μ.


Table 2.1.1 summarizes useful relations among the constants of linear isotropic elastic media.
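The interrelations summarized in such tables can be sketched in code. The formulas below are the standard isotropic identities; the function name and the numerical inputs are illustrative only:

```python
import math

def isotropic_moduli(E, nu):
    """Derive the other isotropic elastic constants from Young's modulus E
    and Poisson's ratio nu, using the standard identities."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))  # Lame's constant lambda
    mu = E / (2 * (1 + nu))                   # shear modulus
    K = E / (3 * (1 - 2 * nu))                # bulk modulus
    M = K + 4 * mu / 3                        # P-wave modulus (= lambda + 2 mu)
    return {"lambda": lam, "mu": mu, "K": K, "M": M}

m = isotropic_moduli(E=70e9, nu=0.25)  # illustrative values in Pa
```

A useful self-check is the round trip E = 9Kμ/(3K + μ), which recovers the input Young's modulus and confirms the table of relations is internally consistent.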

Assumptions and limitations

The preceding equations assume isotropic, linear elastic media.


2.2 Anisotropic form of Hooke's law

Synopsis

Hooke's law for a general anisotropic, linear elastic solid states that the stress σij is linearly proportional to the strain ekl:

σij = cijkl ekl

where cijkl are the elements of the fourth-rank elastic stiffness tensor and summation over repeated indices is implied. The symmetry of the stress and strain tensors (σij = σji and ekl = elk) implies that cijkl = cjikl = cijlk, reducing the number of independent constants to 36. In addition, the existence of a unique strain energy potential requires that

cijkl = cklij

further reducing the number of independent constants to 21. This is the maximum number of independent elastic constants that any homogeneous linear elastic medium can have. Additional restrictions imposed by symmetry considerations reduce the number much further. Isotropic, linear elastic materials, which have maximum symmetry, are completely characterized by two independent constants, whereas materials with triclinic symmetry (the minimum symmetry) require all 21 constants. Alternatively, the strains may be expressed as a linear combination of the stresses

by the following expression:

eij = sijkl σkl

In this case sijkl are the elements of the elastic compliance tensor, which has the same symmetry as the corresponding stiffness tensor. The compliance and stiffness are tensor inverses, denoted by

cijkl sklmn = Iijmn = (1/2)(δim δjn + δin δjm)

The stiffness and compliance tensors must always be positive definite. One way to express this requirement is that all of the eigenvalues of the elasticity tensor (described below) must be positive.
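As an illustrative numerical check (not from the text), one can assemble the isotropic Voigt stiffness matrix from Lamé constants, verify that it is positive definite, and confirm that its matrix inverse reproduces the familiar compliances; NumPy is assumed, and the numerical values are arbitrary:

```python
import numpy as np

def isotropic_stiffness_voigt(lam, mu):
    """6x6 Voigt stiffness of an isotropic medium: c11 = lam + 2*mu on the
    first three diagonal entries, c44 = mu on the last three, c12 = lam
    off-diagonal in the upper-left block."""
    C = np.zeros((6, 6))
    C[:3, :3] = lam
    for i in range(3):
        C[i, i] = lam + 2 * mu
        C[i + 3, i + 3] = mu
    return C

lam, mu = 10.0, 8.0                       # illustrative Lame constants (e.g., GPa)
C = isotropic_stiffness_voigt(lam, mu)
S = np.linalg.inv(C)                      # with the strain convention above, [S] = [C]^(-1)
E = mu * (3 * lam + 2 * mu) / (lam + mu)  # Young's modulus from lambda and mu
```

The eigenvalues of C are all positive, and the recovered compliances satisfy s11 = 1/E and s44 = 1/μ, consistent with the factor-of-2 strain convention built into the Voigt notation.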

Voigt notation

It is a standard practice in elasticity to use an abbreviated Voigt notation for the stresses, strains, and stiffness and compliance tensors, because doing so simplifies some of the key equations (Auld, 1990). In this abbreviated notation, the stresses and strains are written as six-element column vectors rather than as nine-element square matrices:

σ = [σ1, σ2, σ3, σ4, σ5, σ6]ᵀ = [σ11, σ22, σ33, σ23, σ13, σ12]ᵀ

e = [e1, e2, e3, e4, e5, e6]ᵀ = [e11, e22, e33, 2e23, 2e13, 2e12]ᵀ

Note the factor of 2 in the definitions of strains, but not in the definition of stresses. With the Voigt notation, four subscripts of the stiffness and compliance tensors are reduced to two. Each pair of indices ij(kl) is replaced by one index I(J) using the following convention:

ij (or kl)    I (or J)
11            1
22            2
33            3
23, 32        4
13, 31        5
12, 21        6

The stiffnesses and compliances in the two-index notation are then defined as

cIJ = cijkl

sIJ = sijkl × 1 when both I and J are 1, 2, or 3
sIJ = sijkl × 2 when one of I, J is 4, 5, or 6
sIJ = sijkl × 4 when both I and J are 4, 5, or 6
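The index convention and the compliance factors can be sketched as a small helper (illustrative, using 1-based indices to match the text):

```python
# Map a pair of (1-based) tensor indices to the single Voigt index.
PAIR_TO_VOIGT = {(1, 1): 1, (2, 2): 2, (3, 3): 3,
                 (2, 3): 4, (3, 2): 4,
                 (1, 3): 5, (3, 1): 5,
                 (1, 2): 6, (2, 1): 6}

def voigt_index(i, j):
    """Single Voigt index I for the tensor index pair (i, j)."""
    return PAIR_TO_VOIGT[(i, j)]

def compliance_factor(I, J):
    """Factor in s_IJ = factor * s_ijkl: 1, 2, or 4 depending on how many of
    I, J exceed 3. (For stiffnesses, c_IJ = c_ijkl with no factor.)"""
    return 2 ** ((I > 3) + (J > 3))
```

For example, the pair (2, 3) and its transpose (3, 2) both map to I = 4, and s45 carries a factor of 4 while s14 carries only a factor of 2.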


Note how the definition of sIJ differs from that of cIJ. This results from the factors of 2 introduced in the definition of strains in the abbreviated notation. Hence the Voigt matrix representation of the elastic stiffness is

[c11 c12 c13 c14 c15 c16]
[c12 c22 c23 c24 c25 c26]
[c13 c23 c33 c34 c35 c36]
[c14 c24 c34 c44 c45 c46]
[c15 c25 c35 c45 c55 c56]
[c16 c26 c36 c46 c56 c66]

and similarly, the Voigt matrix representation of the elastic compliance is

[s11 s12 s13 s14 s15 s16]
[s12 s22 s23 s24 s25 s26]
[s13 s23 s33 s34 s35 s36]
[s14 s24 s34 s44 s45 s46]
[s15 s25 s35 s45 s55 s56]
[s16 s26 s36 s46 s56 s66]

The Voigt stiffness and compliance matrices are symmetric. The upper triangle contains 21 constants, enough to contain the maximum number of independent constants that would be required for the least symmetric linear elastic material. Using the Voigt notation, we can write Hooke's law as

[σ1]   [c11 c12 c13 c14 c15 c16] [e1]
[σ2]   [c12 c22 c23 c24 c25 c26] [e2]
[σ3] = [c13 c23 c33 c34 c35 c36] [e3]
[σ4]   [c14 c24 c34 c44 c45 c46] [e4]
[σ5]   [c15 c25 c35 c45 c55 c56] [e5]
[σ6]   [c16 c26 c36 c46 c56 c66] [e6]

or, compactly, σI = cIJ eJ.



It is very important to note that the stress (strain) vector and stiffness (compliance) matrix in Voigt notation are not tensors.

Caution

Some forms of the abbreviated notation adopt different definitions of strains, moving the factors of 2 and 4 from the compliances to the stiffnesses. However, the form given above is the more common convention. In the two-index notation, cIJ and sIJ can conveniently be represented as 6 × 6 matrices; however, these matrices no longer follow the laws of tensor transformation. Care must be taken when transforming from one coordinate system to another. One way is to go back to the four-index notation and then use the ordinary laws of coordinate transformation. A more efficient method is to use the Bond transformation matrices, which are explained in Section 1.4 on coordinate transformations.

Voigt stiffness matrix structure for common anisotropy classes

The nonzero components of the more symmetric anisotropy classes commonly used in modeling rock properties are given below in Voigt notation.

Isotropic: two independent constants

The structure of the Voigt elastic stiffness matrix for an isotropic linear elastic material has the following form:

[c11 c12 c12  0   0   0 ]
[c12 c11 c12  0   0   0 ]
[c12 c12 c11  0   0   0 ]
[ 0   0   0  c44  0   0 ]
[ 0   0   0   0  c44  0 ]
[ 0   0   0   0   0  c44]

where c11 = λ + 2μ, c44 = μ, and c12 = c11 − 2c44 = λ.
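A defining property of the isotropic form is invariance under any rotation of the coordinate axes. This can be checked numerically (an illustrative sketch, assuming NumPy; the Lamé values and rotation angle are arbitrary):

```python
import numpy as np

lam, mu = 12.0, 7.0
VOIGT = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]

# Four-index isotropic stiffness: c_ijkl = lam d_ij d_kl + mu (d_ik d_jl + d_il d_jk)
d = np.eye(3)
c = (lam * np.einsum("ij,kl->ijkl", d, d)
     + mu * (np.einsum("ik,jl->ijkl", d, d) + np.einsum("il,jk->ijkl", d, d)))

# Collapse to the 6x6 Voigt form (c_IJ = c_ijkl, no factors for stiffness)
C = np.array([[c[i, j, k, l] for (k, l) in VOIGT] for (i, j) in VOIGT])

# Rotate the axes by an arbitrary angle: the isotropic stiffness is unchanged
t = np.deg2rad(40.0)
beta = np.array([[np.cos(t), np.sin(t), 0.0],
                 [-np.sin(t), np.cos(t), 0.0],
                 [0.0, 0.0, 1.0]])
c_rot = np.einsum("ia,jb,kc,ld,abcd->ijkl", beta, beta, beta, beta, c)
```

The rotated tensor equals the original, and the Voigt matrix reproduces the structure above, including the dependency c12 = c11 − 2c44 that leaves only two independent constants.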
