Applied Numerical Methods Using MATLAB, Part 5

Figure P4.1 Iterative method based on the fixed-point theorem.

Noting the value of the first derivative of this iterative function g_a(x), determine which solution attracts this iteration and certify it in Fig. P4.1b.

In addition, run the MATLAB routine "fixpt()" to carry out the iteration (P4.1.5) with the initial points x0 = 0.2, x0 = 1, and x0 = 3. What does the routine yield for each initial point?

(cf) This illustrates that the outcome of an algorithm may depend on the starting point.


4.2 Bisection Method and Fixed-Point Iteration

Consider the nonlinear equation treated in Example 4.2:

f(x) = tan(π − x) − x = 0 (P4.2.1)

Two graphical solutions of this equation are depicted in Fig. P4.2, which can be obtained by typing the following statements into the MATLAB command window:

>>ezplot('tan(pi-x)',-pi/2,3*pi/2)
>>hold on, ezplot('x+0',-pi/2,3*pi/2)

(a) In order to use the bisection method for finding the solution between 1.5 and 3, Charley typed the statements shown below. Could he get the right solution? If not, explain to him why he failed and suggest how to make it work.

>>fp42 = inline('tan(pi-x)-x','x');
>>TolX = 1e-4; MaxIter = 50;
>>x = bisct(fp42,1.5,3,TolX,MaxIter)

(b) In order to find some interval to which the bisection method is applicable, Jessica used the MATLAB command "find()" as shown below, so that she might use the bisection method to find a solution between 1.5 and 2.0 by typing the following command:

>>x=bisct(fp42,1.5,2,TolX,MaxIter)

Check the validity of the solution, that is, check whether f(x) = 0 or not, by typing

>>fp42(x)

If her solution is not good, explain the reason. If you are not sure about it, you can try plotting the graph in Fig. P4.2 by typing the following statements into the MATLAB command window:

>>x = [-pi/2+0.05:0.05:3*pi/2 - 0.05];

>>plot(x,tan(pi - x),x,x)


Figure P4.2 The graphical solutions of tan(π − x) − x = 0 or tan(π − x) = x.

(cf) This helps us understand why fzero(fp42,1.8) leads to the wrong solution without any warning message, as mentioned in Example 4.2.

(c) In order to find the solution around x = 2.0 by using the fixed-point iteration with the initial point x0 = 2.0, Vania defined the iterative function.


4.3 Recursive (Self-Calling) Routine for Bisection Method

As stated in Section 1.3, MATLAB allows us to make recursive routines which call themselves. Modify the MATLAB routine "bisct()" (in Section 4.2) into a recursive routine "bisct_r()" and run it to solve Eq. (P4.2.1); a sketch follows.

4.4 Newton Method and Secant Method

As can be seen in Fig. 4.5, the secant method introduced in Section 4.5 was devised to remove the necessity of the derivative/gradient and to improve the convergence. But it sometimes turns out to be worse than the Newton method. Apply the routines "newton()" and "secant()" to solve

f_p44(x) = x^3 − x^2 − x + 1 = 0 (P4.4)

starting with the initial point x0 = −0.2 one time and x0 = −0.3 for another shot.
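For reference, a minimal sketch of the Newton iteration (the argument list of the book's "newton()" may differ; df is the derivative function):

function x = newton_sketch(f,df,x0,TolX,MaxIter)
%minimal Newton iteration: x <- x - f(x)/f'(x)
x = x0;
for k = 1:MaxIter
  dx = -feval(f,x)/feval(df,x); % Newton update step
  x = x + dx;
  if abs(dx) < TolX, break; end % stop once the step is small enough
end

The secant method differs only in replacing feval(df,x) by the slope (f(x_k) − f(x_{k−1}))/(x_k − x_{k−1}) through the two most recent iterates, so no derivative function is needed.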

4.5 Acceleration of Aitken–Steffensen Method

A sequence converging to a limit x^o can be described as

x^o − x_{k+1} ≈ A(x^o − x_k) with |A| < 1 (P4.5.1)

In order to think about how to improve the convergence speed of this sequence, we define a new sequence p_k. Noting that

(x^o − x_{k+1})/(x^o − x_k) ≈ A ≈ (x^o − x_k)/(x^o − x_{k−1})

and cross-multiplying yields

(x^o − x_{k+1})(x^o − x_{k−1}) ≈ (x^o − x_k)^2
(x^o)^2 − x_{k+1}x^o − x_{k−1}x^o + x_{k+1}x_{k−1} ≈ (x^o)^2 − 2x^o x_k + x_k^2

we can solve this for x^o to define the new sequence

p_k = (x_{k+1}x_{k−1} − x_k^2)/(x_{k+1} − 2x_k + x_{k−1}) (P4.5.2)

Trang 5

Table P4.5 Comparison of Various Methods Applied for Solving Nonlinear Equations

Newton Secant Steffensen Schroder fzero() fsolve()

(b) Modify the routine "newton()" into a routine "stfns()" that generates the sequence (P4.5.2) and run it to solve

f_42(x) = tan(π − x) − x = 0 (with x0 = 1.6) (P4.5.4)
f_p44(x) = x^3 − x^2 − x + 1 = 0 (with x0 = 0) (P4.5.5)
f_p45(x) = (x − 5)^4 = 0 (with x0 = 0) (P4.5.6)

Fill in Table P4.5 with the results and those obtained by using the routines "newton()", "secant()" (with the error tolerance TolX = 10^-5), "fzero()", and "fsolve()"; a sketch of "stfns()" follows.
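Since the exact form of (P4.5.2) is not reproduced above, the sketch below realizes one common variant: the Aitken formula for p_k applied to three successive Newton iterates (the argument list is an assumption):

function p = stfns_sketch(f,df,x0,TolX,MaxIter)
%Aitken/Steffensen acceleration of the Newton sequence (a sketch)
x = x0;
for k = 1:MaxIter
  x1 = x - feval(f,x)/feval(df,x);    % Newton iterate x_k
  x2 = x1 - feval(f,x1)/feval(df,x1); % Newton iterate x_{k+1}
  p = x - (x1 - x)^2/(x2 - 2*x1 + x); % accelerated value p_k from the derivation above
  if abs(p - x) < TolX, break; end
  x = p;
end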

4.6 Acceleration of Newton Method for Multiple Roots: Schroder Method

In order to improve the convergence speed, Schroder modifies the Newton iterative algorithm (4.4.2) into

x_{k+1} = x_k − M f(x_k)/f'(x_k)

with M the order of multiplicity of the root we want to find.

Based on this idea, modify the routine "newton()" into a routine "schroder()" and run it to solve Eqs. (P4.5.4)–(P4.5.6). Fill in the corresponding blanks of Table P4.5 with the results.
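A minimal sketch of such a routine (the argument list, with the multiplicity M added, is an assumption):

function x = schroder_sketch(f,df,x0,M,TolX,MaxIter)
%Schroder iteration for a root of multiplicity M: x <- x - M*f(x)/f'(x)
x = x0;
for k = 1:MaxIter
  dx = -M*feval(f,x)/feval(df,x); % the Newton step scaled by M
  x = x + dx;
  if abs(dx) < TolX, break; end
end

For (P4.5.6), f(x) = (x − 5)^4 has a root of multiplicity M = 4 at x = 5, which is where the acceleration pays off.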


4.7 Newton Method for Systems of Nonlinear Equations

Apply the routine "newtons()" (Section 4.6) and the MATLAB built-in routine "fsolve()" (with [x0 y0] = [1 0.5]) to solve the following systems of equations. Fill in Table P4.7 with the results.

4.8 Newton Method for Systems of Nonlinear Equations

Apply the routine "newtons()" (Section 4.6) and the MATLAB built-in routine "fsolve()" (with [x0 y0 z0] = [1 1 1]) to solve the following systems of equations. Fill in Table P4.8 with the results.
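The systems themselves belong to Tables P4.7 and P4.8, which are not reproduced here; the usage sketch below therefore solves a placeholder system, only to show the calling pattern:

fp47 = inline('[x(1)^2 + x(2)^2 - 1; x(1) - x(2)]','x'); % placeholder system, not from Table P4.7
x0 = [1 0.5]; % the initial guess given above
x = fsolve(fp47,x0) % compare with newtons(fp47,x0,TolX,MaxIter) (argument list assumed)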


Table P4.7 Applying newtons()/fsolve() for Systems of Nonlinear Equations


In order to find the average modulation order x_i for each user of an OFDM (orthogonal frequency division multiplexing) system that has N (= 128) subchannels to assign to each of the four users in the environment of noise power N0 and the bit error rate (probability of bit error) P_e, a communication system expert, Mi-hyun, formulated the problem into the system of five nonlinear equations as follows:

function y = fp_bits(x,a,Pe)

%x(i),i = 1:4 correspond to the modulation order of each user

%x(5) corresponds to the Lagrange multiplier (Lambda)

Compose a program which solves the above system of nonlinear equations (with N0 = 1 and P_e = 10^-4) to get the modulation order x_i of each user


for five different sets of data rates

a = [32 32 32 32], [64 32 32 32], [128 32 32 32], [256 32 32 32], and [512 32 32 32]

and plots a1/x1 (the number of subchannels assigned to user 1) versus a1 (the data rate of user 1).

4.10 Temperature Rising from Heat Flux in a Semi-infinite Slab

Consider a semi-infinite slab whose temperature rises as a function of position x > 0 and time t > 0 according to an expression involving the function erfc(), which is defined by Eq. (P4.9.3), with

Q (heat flux) = 200 J/m^2 s, k (conductivity) = 0.015 J/m/s/°C, a (diffusivity) = 2.5 × 10^-5 m^2/s

In order to find the heat transfer speed, a heating system expert, Kyung-won, wants to solve the above equation to get the positions x(t) with a temperature rise of T = 30 °C at t = 10:10:200 s. Compose a program which does this job and plots x(t) versus t.
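The temperature-rise equation itself is not reproduced above; the sketch below assumes the standard constant-flux solution for a semi-infinite slab, T(x,t) = (2Q/k) sqrt(at/π) e^{-x^2/(4at)} − (Qx/k) erfc(x/(2 sqrt(at))), so treat it as illustrative rather than the book's exact formulation:

Q = 200; k = 0.015; a = 2.5e-5; Tr = 30; % given data; Tr is the target temperature rise
ts = 10:10:200; xt = zeros(size(ts));
for n = 1:length(ts)
  t = ts(n); % temperature rise minus target, as a function of position x (assumed formula)
  fT = @(x) (2*Q/k)*sqrt(a*t/pi)*exp(-x.^2/(4*a*t)) - (Q*x/k).*erfc(x/(2*sqrt(a*t))) - Tr;
  xt(n) = fzero(fT,[0 1]); % the bracket [0 1] m is an assumed search interval
end
plot(ts,xt), xlabel('t [s]'), ylabel('x(t) [m]')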

4.11 Damped Newton Method for a Set of Nonlinear Equations

Consider the routine "newtons()", which is made for solving a system of equations and introduced in Section 4.6.

(a) Run the routine with the initial point (x10, x20) = (0.5, 0.2) to solve Eq. (4.6.5) and certify that it does not yield the right solution, as depicted in Fig. 4.6c.

(b) In order to keep the step size adjusted in the case where the norm of the vector function f(x_{k+1}) at iteration k + 1 is larger than that of f(x_k) at iteration k, insert (activate) the statements numbered from 1 to 6 of the routine "newtons()" (Section 4.6) by deleting the comment mark (%) at the beginning of each line, to make a modified routine "newtonds()" which implements the damped Newton method. Run it with the initial point (x10, x20) = (0.5, 0.2) to solve Eq. (4.6.5) and certify that it yields the right solution, as depicted in Fig. 4.6d.

(c) Run the MATLAB built-in routine "fsolve()" with the initial point (x10, x20) = (0.5, 0.2) to solve Eq. (4.6.5). Does it present you with the right solution?
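In sketch form, the damping idea of (b) looks like this inside one Newton iteration (x is the current iterate, dx the full Newton step, and f the vector function; the halving limit is an assumption):

fx = feval(f,x); lambda = 1; % residual at the current point; start with the full step
for j = 1:8 % up to 8 halvings (assumed limit)
  xn = x + lambda*dx;
  if norm(feval(f,xn)) < norm(fx), break; end % accept once the residual norm shrinks
  lambda = lambda/2; % otherwise damp the step and try again
end
x = xn;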


NUMERICAL DIFFERENTIATION/INTEGRATION

For a function f(x) of a variable x, its first derivative is defined as

f'(x) = lim_{h→0} [f(x + h) − f(x)]/h (5.1.1)

However, this gives our computers a headache, since they do not know how to take a limit. Any input number given to computers must be a definite number and can be neither too small nor too large to be understood by the computer. The 'theoretically' infinitesimal number h involved in this equation is therefore replaced by a finite step size h > 0, which yields the forward difference approximation

D_{f1}(x, h) = [f(x + h) − f(x)]/h (5.1.2)

How far away is this approximation from the true value of (5.1.1)? In order to do

the error analysis, we take the Taylor series expansion of f (x + h) about x as

f(x + h) = f(x) + h f'(x) + (h^2/2) f^(2)(x) + (h^3/3!) f^(3)(x) + ··· (5.1.3)

Applied Numerical Methods Using MATLAB, by Yang, Cao, Chung, and Morris
Copyright © 2005 John Wiley & Sons, Inc., ISBN 0-471-69833-4


Subtracting f(x) from both sides of (5.1.3) and dividing both sides by the step size h yields

D_{f1}(x, h) = [f(x + h) − f(x)]/h = f'(x) + (h/2) f^(2)(x) + ··· = f'(x) + O(h) (5.1.4)

whose truncation error is proportional to the step size h. Now, in order to derive another approximation formula for the first derivative having a smaller error, let's remove the first-order term with respect to h from Eq. (5.1.4) by substituting 2h for h in the equation

D_{f1}(x, 2h) = [f(x + 2h) − f(x)]/(2h) = f'(x) + h f^(2)(x) + ···

and subtracting this result from two times the equation. Then, we get

D_{f2}(x, h) = 2D_{f1}(x, h) − D_{f1}(x, 2h) = [−f(x + 2h) + 4f(x + h) − 3f(x)]/(2h) = f'(x) + O(h^2) (5.1.5)

which can be regarded as an improvement over Eq. (5.1.4), since it has a truncation error of O(h^2) for |h| < 1.

How about the backward difference approximation?

D_b(x, h) = [f(x) − f(x − h)]/h ≡ D_{f1}(x, −h) (h is the step size) (5.1.6)

This also has an error of O(h) and can be processed to yield an improved version having a truncation error of O(h^2).

In order to derive another approximation formula for the first derivative, we take the Taylor series expansion of f(x + h) and f(x − h) up to the fifth order to write

f(x + h) = f(x) + h f'(x) + (h^2/2) f^(2)(x) + (h^3/3!) f^(3)(x) + (h^4/4!) f^(4)(x) + (h^5/5!) f^(5)(x) + ···
f(x − h) = f(x) − h f'(x) + (h^2/2) f^(2)(x) − (h^3/3!) f^(3)(x) + (h^4/4!) f^(4)(x) − (h^5/5!) f^(5)(x) + ···

and divide the difference between these two equations by 2h to get the central difference approximation for the first derivative as

D_c(x, h) = [f(x + h) − f(x − h)]/(2h) = f'(x) + (h^2/3!) f^(3)(x) + (h^4/5!) f^(5)(x) + ··· (5.1.8)

which has an error of O(h^2), similarly to Eqs. (5.1.5) and (5.1.7). This can also be processed to yield an improved version having a truncation error of O(h^4).

Furthermore, this procedure can be formularized into a general formula, called 'Richardson's extrapolation', for improving the difference approximations of the derivatives.
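The general formula is not reproduced here, but one extrapolation step applied to the central difference (5.1.8) illustrates the mechanism (a sketch; the test function and step size are arbitrary choices):

f = inline('sin(x)','x'); x = pi/4; h = 0.1; % test function with known derivative cos(x)
Dh  = (feval(f,x + h) - feval(f,x - h))/(2*h);     % central difference with step h, error O(h^2)
D2h = (feval(f,x + 2*h) - feval(f,x - 2*h))/(4*h); % the same formula with step 2h
D4  = (2^2*Dh - D2h)/(2^2 - 1); % Richardson extrapolation: the h^2 terms cancel, leaving O(h^4)
[Dh - cos(x), D4 - cos(x)] % the extrapolated error is orders of magnitude smaller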

In the previous section, we derived some difference approximation formulas for the first derivative. Since their errors are proportional to some power of the step size h, it seems that the errors continue to decrease as h gets smaller.

However, this is only half of the story, since we considered only the truncation error caused by truncating the high-order terms in the Taylor series expansion and did not take account of the round-off error caused by quantization.

In this section, we will discuss the round-off error as well as the truncation error so as to gain a better understanding of how the computer really works. For this purpose, suppose that the function values are quantized as

y_i = f(x + ih) + e_i, i = −2, −1, 0, 1, 2

where the magnitudes of the round-off (quantization) errors e_2, e_1, e_0, e_{−1}, and e_{−2} are all smaller than some positive number ε, that is, |e_i| ≤ ε. Then the total error of the forward difference approximation (5.1.4) can be derived as

D_{f1}(x, h) = (y_1 − y_0)/h = f'(x) + (e_1 − e_0)/h + (K_1/2) h, so that |D_{f1}(x, h) − f'(x)| ≤ 2ε/h + (|K_1|/2) h with K_1 = f^(2)(x)

Look at the right-hand side of this inequality, that is, the upper bound of the error. It consists of two parts: the first one is due to the round-off error and is in inverse proportion to the step size h, while the second one is due to the truncation error and is in direct proportion to h. Therefore, the upper bound of the total error can be minimized with respect to the step size h to give the optimum step size

h_o = 2 sqrt(ε/|K_1|) (5.2.2)

The same analysis applies to the central difference approximation (5.1.8) with the quantized values:

D_c(x, h) = (y_1 − y_{−1})/(2h) = [f(x + h) + e_1 − f(x − h) − e_{−1}]/(2h) (5.1.8)
          = f'(x) + (e_1 − e_{−1})/(2h) + (K_2/6) h^2, so that |D_c(x, h) − f'(x)| ≤ ε/h + (|K_2|/6) h^2


The right-hand side of this inequality is minimized to yield the optimum step size

h_o = (3ε/|K_2|)^{1/3} (5.2.3)

From what we have seen so far, we can tell that, as we make the step size h smaller, the round-off error may increase, while the truncation error decreases. This is called the 'step-size dilemma'. Therefore, there must be some optimal step size h_o for the difference approximation formulas, as derived analytically in Eqs. (5.2.2), (5.2.3), and (5.2.4). However, these equations are only of theoretical value and cannot be used practically to determine h_o, because we usually don't have any information about the high-order derivatives and, consequently, we cannot estimate K_1, K_2, .... Besides, noting that h_o minimizes not the real error, but its upper bound, we can never expect the true optimal step size to be uniform for all x, even with the same approximation formula.

Now, we can verify the step-size dilemma and the existence of some optimal step size h_o by computing the numerical derivative of a function, say, f(x) = sin x, whose analytical derivatives are well known. To see how the errors of the difference approximation formulas (5.1.4) and (5.1.8) depend on the step size h, we computed their values for x = π/4 together with their errors, as summarized in Tables 5.1 and 5.2. From these results, it appears that the errors of (5.1.4) and (5.1.8) are minimized with h ≈ 10^-8 and h ≈ 10^-5, respectively. This may be justified by the following facts:

• Noting that the number of significant bits is 52, which is the number of mantissa bits (Section 1.2.1), or, equivalently, the number of significant digits is about 52 × 3/10 ≈ 16 (since 2^10 ≈ 10^3), and the value of f(x) = sin x is less than or equal to one, the round-off error is roughly ε ≈ 10^-16/2.


Table 5.1 The Forward Difference Approximation (5.1.4) for the First Derivative of f(x) = sin x and Its Error from the True Value (cos π/4 = 0.7071067812), Depending on the Step Size h

h_o = 0.0000000168 (the optimal value of h obtained from Eq. (5.2.2))

Table 5.2 The Central Difference Approximation (5.1.8) for the First Derivative of f(x) = sin x and Its Error from the True Value (cos π/4 = 0.7071067812), Depending on the Step Size h

h_o = 0.0000059640 (the optimal value of h obtained from Eq. (5.2.3))

• Accordingly, Eqs. (5.2.2) and (5.2.3) give the theoretical optimal values h_o = 2 sqrt(ε/|K_1|) ≈ 1.68 × 10^-8 and h_o = (3ε/|K_2|)^{1/3} ≈ 5.96 × 10^-6, respectively, matching the h_o values quoted with Tables 5.1 and 5.2.
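The step-size dilemma of Tables 5.1 and 5.2 can be reproduced with a few lines of MATLAB (a sketch; the step-size range is an arbitrary choice):

f = inline('sin(x)','x'); x = pi/4; dtrue = cos(x); % true derivative 0.7071067812...
hs = 10.^(-4:-1:-12); % sweep the step size over several decades
for n = 1:length(hs)
  h = hs(n);
  Df(n) = (feval(f,x + h) - feval(f,x))/h;         % forward difference (5.1.4)
  Dc(n) = (feval(f,x + h) - feval(f,x - h))/(2*h); % central difference (5.1.8)
end
[hs.', Df.' - dtrue, Dc.' - dtrue] % errors shrink and then grow again as h keeps decreasing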

(a) Error bound of Eq. (5.1.4) vs. step size h (b) Error bound of Eq. (5.1.8) vs. step size h

Figure 5.1 Forward/central difference approximation error of first derivative versus step size h.

Figure 5.1a/b shows how the error bounds of the difference approximations (5.1.4)/(5.1.8) for the first derivative vary with the step size h, implying that there is some optimal value of the step size h with which the error bound of the numerical derivative is minimized. It seems that we might be able to get the optimal step size h_o by using this kind of graph or directly using Eq. (5.2.2), (5.2.3), or (5.2.4). But, as mentioned before, this is not possible as long as the high-order derivatives are unknown (as is usually the case). Very fortunately, Tables 5.1 and 5.2 suggest that we might be able to guess a good value of h by watching how small |D_{ik} − D_{i(k−1)}| is for a given problem. On the other hand, Fig. 5.2a/b shows the tangential lines based on the forward/central difference approximations (5.1.4)/(5.1.8) of the first derivative at x = π/4 with three values of step size h. They imply that there is some optimal step size h_o, and that the numerical approximation error becomes larger if we make the step size h larger or smaller than this value.

DIFFERENCE APPROXIMATION FOR SECOND AND HIGHER DERIVATIVE

In order to obtain an approximation formula for the second derivative, we take the Taylor series expansion of f(x + h) and f(x − h) up to the fifth order:

f(x ± h) = f(x) ± h f'(x) + (h^2/2) f^(2)(x) ± (h^3/3!) f^(3)(x) + (h^4/4!) f^(4)(x) ± (h^5/5!) f^(5)(x) + ···

Adding these two equations (to remove the f'(x) terms) and then subtracting 2f(x) from both sides and dividing both sides by h^2 yields the central difference approximation for the second derivative as

D_c^(2)(x, h) = [f(x + h) − 2f(x) + f(x − h)]/h^2 = f^(2)(x) + (h^2/12) f^(4)(x) + (2h^4/6!) f^(6)(x) + ··· (5.3.1)

which has a truncation error of O(h^2).

Richardson's extrapolation can be used for manipulating this equation to remove the h^2 term, which yields an improved version

[2^2 D_c^(2)(x, h) − D_c^(2)(x, 2h)]/(2^2 − 1) = [−f(x + 2h) + 16f(x + h) − 30f(x) + 16f(x − h) − f(x − 2h)]/(12h^2) (5.3.2)

which has a truncation error of O(h^4).
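For example (a sketch; the test function and step size are arbitrary choices, and the true second derivative of sin x is −sin x):

f = inline('sin(x)','x'); x = pi/4; h = 0.01;
D2 = (feval(f,x + h) - 2*feval(f,x) + feval(f,x - h))/h^2; % central difference (5.3.1)
err = D2 + sin(x) % roughly (h^2/12)*sin(pi/4), consistent with the O(h^2) truncation error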

The difference approximation formulas for the first and second derivatives derived so far are summarized in Table 5.3, where the following notations are used:

D_{fi}^(N) / D_{bi}^(N) / D_{ci}^(N) is the forward/backward/central difference approximation for the Nth derivative having an error of O(h^i) (h is the step size)

f_k = f(x + kh)


Now, we turn our attention to the higher-order derivatives. But, instead of deriving the specific formulas, let's make an algorithm to generate whatever difference approximation formula we want. For instance, if we want to get the approximation formula of the second derivative based on the function values f_2, f_1, f_0, f_{−1}, and f_{−2}, we write it as a linear combination

[c_2 f_2 + c_1 f_1 + c_0 f_0 + c_{−1} f_{−1} + c_{−2} f_{−2}]/h^2

and substitute the Taylor series expansions of the f_k about x:

f_{±1} = f_0 ± h f_0' + (h^2/2) f_0^(2) ± (h^3/3!) f_0^(3) + (h^4/4!) f_0^(4) ± ···
f_{±2} = f_0 ± 2h f_0' + ((2h)^2/2) f_0^(2) ± ((2h)^3/3!) f_0^(3) + ((2h)^4/4!) f_0^(4) ± ···

We should solve the following set of equations to determine the coefficients c_2, c_1, c_0, c_{−1}, and c_{−2} so as to make the expression conform to the second derivative:

Σ_k c_k = 0, Σ_k k c_k = 0, Σ_k (k^2/2!) c_k = 1, Σ_k (k^3/3!) c_k = 0, Σ_k (k^4/4!) c_k = 0


Table 5.3 The Difference Approximation Formulas for the First and Second Derivatives

O(h) forward difference approximation for the first derivative: D_{f1}(x, h) = (f_1 − f_0)/h


function [c,err,eoh,A,b] = difapx(N,points)
%difapx.m to get the difference approximation for the Nth derivative
% ...
if abs(err) < eps, err = A(L + 2,:)*c'; eoh = L - N + 1; end
if points(1) < points(2), c = fliplr(c); end

The procedure of setting up this equation and solving it is cast into the MATLAB routine "difapx()", which can be used to generate the coefficients of, say, the approximation formulas (5.1.7), (5.1.9), and (5.3.2), just for practice/verification/fun, whatever your purpose is.

>>format rat %to make all numbers represented in rational form
>>difapx(1,[0 -2]) %1st derivative based on {f0,f-1,f-2}
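A minimal sketch of the idea behind "difapx()" follows; unlike the full routine, it returns only the coefficients, and the interpretation of points as the two end offsets is an assumption:

function c = difapx_sketch(N,points)
%coefficients c such that f^(N)(x) ~ c*[f(x+k(1)h); ...; f(x+k(end)h)]/h^N
k = points(1):sign(points(2) - points(1)):points(2); % node offsets, e.g., 0,-1,-2
L = length(k);
A = zeros(L,L);
for i = 1:L
  A(i,:) = k.^(i - 1)/factorial(i - 1); % Taylor coefficient of f^(i-1)(x) at each node
end
b = zeros(L,1); b(N + 1) = 1; % keep only the Nth-derivative term
c = (A\b)'; % difapx_sketch(1,[0 -2]) yields [3/2 -2 1/2], the O(h^2) backward difference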

Example 5.1 Numerical/Symbolic Differentiation for Taylor Series Expansion.

Consider how to use MATLAB to get the Taylor series expansion of a function, say, e^-x about x = 0, which we already know is

e^-x = 1 − x + (1/2)x^2 − (1/3!)x^3 + (1/4!)x^4 − (1/5!)x^5 + ···

• taylor(f) gives the fifth-order Maclaurin series expansion of f.
• taylor(f,n + 1) with an integer n > 0 gives the nth-order Maclaurin series expansion of f.
• taylor(f,a) with a real number (a) gives the fifth-order Taylor series expansion of f about a.
• taylor(f,n + 1,a) gives the nth-order Taylor series expansion of f about the default variable = a.
• taylor(f,n + 1,a,y) gives the nth-order Taylor series expansion of f(y) about y = a.

(cf) The target function f must be a legitimate expression given directly as the first input argument.
(cf) Before using the command "taylor()", one should declare the arguments of the function as symbols by putting a statement like "syms x t".
(cf) In the case where the function has several arguments, it is good practice to put the independent variable as the last input argument of "taylor()", though taylor() takes the one closest (alphabetically) to 'x' as the independent variable by default, but only if it has been declared as a symbolic variable and is contained as an input argument of the function f.
(cf) One should use the MATLAB command "sym2poly()" if he wants to extract the coefficients from the Taylor series expansion obtained as a symbolic expression.

The following MATLAB program "nm5e01" finds the coefficients of the fifth-order Taylor series expansion of e^-x about x = 0 by using the two methods; the initializations of N, xo, and h below are assumed values filled in where this excerpt omits them.

%nm5e01: Nth-order Taylor series expansion for e^-x about xo in Ex 5.1
f = inline('exp(-x)','x');
N = 5; xo = 0; h = 0.01; % assumed initializations (not shown in the excerpt)
%Numerical computation method
T(1) = feval(f,xo); tmp = 1; % zeroth-order coefficient f(xo)
for i = 1:N
  tmp = tmp*i*h; %i!(factorial i)*h^i
  c = difapx(i,[-i i]); %coefficient of numerical derivative
  dix = c*feval(f,xo + [-i:i]*h)'; %/h^i; %derivative
  T(i + 1) = dix/tmp; %Taylor series coefficient
end
format rat, Tn = fliplr(T) %descending order
%Symbolic computation method
syms x; Ts = sym2poly(taylor(exp(-x),N + 1,xo))
%discrepancy
format short, discrepancy = norm(Tn - Ts)

NUMERICAL DIFFERENTIATION OF A FUNCTION GIVEN AS A DATA FILE

The difference approximation formulas derived in the previous sections are applicable only when the target function f(x) to differentiate is somehow given. In this section, we think about how to get the numerical derivatives when we are given only a data file containing several data points. A possible measure is to make the interpolating function by using one of the methods explained in Chapter 3 and get the derivative of the interpolating function.

For simplicity, let's reconsider the problem of finding the derivative of f(x) = sin x at x = π/4, where the function is given as one of the following sets of data points.

We make the MATLAB program "nm540", which uses the routine "lagranp()" to find the interpolating polynomial, uses the routine "polyder()" to differentiate the polynomial, and computes the error of the resulting derivative from the true value. Let's run it with x defined appropriately according to the given set of data points and see the results:

>>nm540
dfx(0.78540) = 0.689072 (error: -0.018035) %with x = [1:3]*pi/8
dfx(0.78540) = 0.706556 (error: -0.000550) %with x = [0:4]*pi/8
dfx(0.78540) = 0.707072 (error: -0.000035) %with x = [2:6]*pi/16

This illustrates that if we have more points that are distributed closer to the target point, we may get a better result.

x0 = pi/4; df0 = cos(x0); % target point and true derivative (assumed initialization)
for m = {[1:3]*pi/8, [0:4]*pi/8, [2:6]*pi/16} % the three data sets listed above
  x = m{1}; y = sin(x); % sample the function at the given abscissas
  px = lagranp(x,y); % Lagrange polynomial interpolating (x,y)
  dpx = polyder(px); % derivative of polynomial px
  dfx = polyval(dpx,x0); % evaluate the derivative polynomial at x0
  fprintf(' dfx(%6.4f) = %10.6f (error: %10.6f) \n', x0,dfx,dfx - df0);
end

One more thing to mention before closing this section is that we have the MATLAB built-in routine "diff()", which finds the difference vector for a given vector. When the data points {(x_k, f(x_k)), k = 1, 2, ...} are given as an ASCII data file named "xy.dat", we can use the routine "diff()" to get the divided difference, which is similar to the derivative of a continuous function.

>>load xy.dat %input the contents of 'xy.dat' as a matrix named xy
>>dydx = diff(xy(:,2))./diff(xy(:,1)); dydx' %divided difference
dydx = 2.0000 0.50000 2.0000

The general form of numerical integration of a function f(x) over some interval [a, b] is a weighted sum of the function values at a finite number (N + 1) of sample points (nodes), referred to as 'quadrature':

∫_a^b f(x) dx ≈ Σ_{k=0}^{N} w_k f(x_k)

Figure 5.3 shows the integrations over two segments by the midpoint rule, the trapezoidal rule, and Simpson's rule, which are referred to as Newton–Cotes formulas for being based on the approximating polynomial, and are implemented by the following formulas:

∫_{x_{k−1}}^{x_{k+1}} f(x) dx ≈ 2h f_k (midpoint rule)
∫_{x_{k−1}}^{x_{k+1}} f(x) dx ≈ (h/2)(f_{k−1} + 2f_k + f_{k+1}) (trapezoidal rule)
∫_{x_{k−1}}^{x_{k+1}} f(x) dx ≈ (h/3)(f_{k−1} + 4f_k + f_{k+1}) (Simpson's rule) (5.5.4)

(a) The midpoint rule (b) The trapezoidal rule (c) Simpson's rule

Figure 5.3 Various methods of numerical integration.

These three integration rules are based on approximating the target function (integrand) by a zeroth-, first-, and second-degree polynomial, respectively. Since the first two integrations are obvious, we are going to derive just Simpson's rule (5.5.4). For simplicity, we shift the graph of f(x) by −x_k along the x axis or, equivalently, make the variable substitution t = x − x_k, so that the abscissas of the three points on the curve of f(x) change from x = {x_k − h, x_k, x_k + h} to t = {−h, 0, +h}. Then we find the coefficients of the second-degree polynomial that matches f at these three points and integrate it over [−h, h].
