raw_input"\nPress return to exit"Because the roots are computed with finite accuracy, each deflation introducessmall errors in the coefficients of the deflated polynomial.. First Noncent
4.6 Systems of Equations
The trajectory of a satellite orbiting the earth is

R = C / [1 + e sin(θ + α)]

where (R, θ) are the polar coordinates of the satellite, and C, e, and α are constants (e is known as the eccentricity of the orbit). If the satellite was observed at the three positions

(table of observed θ and R values)
A projectile is launched at O with the velocity v at the angle θ to the horizontal. The parametric equations of the trajectory are

x = (v cos θ) t
y = (v sin θ) t − (1/2) g t^2

where t is the time measured from the instant of launch and g is the gravitational acceleration.
The three angles shown in the figure of the four-bar linkage are related by

150 cos θ1 + 180 cos θ2 − 200 cos θ3 = 200
150 sin θ1 + 180 sin θ2 − 200 sin θ3 = 0

Determine θ1 and θ2 when θ3 = 75°. Note that there are two solutions.
where T is the horizontal component of the cable force (it is the same in all segments of the cable). In addition, there are two geometric constraints imposed by the positions of the supports:

−4 sin θ1 − 6 sin θ2 + 5 sin θ3 = −3
4 cos θ1 + 6 cos θ2 + 5 cos θ3 = 12

Determine the angles θ1, θ2, and θ3.
∗4.7 Zeroes of Polynomials

A polynomial of degree n can be written as

Pn(x) = a0 + a1x + a2x^2 + · · · + anx^n   (4.9)

The polynomial equation Pn(x) = 0 has exactly n roots, which may be real or complex. If the coefficients are real, the complex roots always occur in conjugate pairs (xr + ixi, xr − ixi), where xr and xi are the real and imaginary parts, respectively.
For real coefficients, the number of real roots can be estimated from the rule of Descartes:

• The number of positive, real roots equals the number of sign changes in the expression for Pn(x), or less by an even number.
• The number of negative, real roots is equal to the number of sign changes in Pn(−x), or less by an even number.
As an example, consider P3(x) = x^3 − 2x^2 − 8x + 27. Because the sign of the coefficients changes twice, P3(x) = 0 has either two or zero positive real roots. On the other hand, P3(−x) = −x^3 − 2x^2 + 8x + 27 contains a single sign change; hence P3(x) possesses one negative real zero.
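The sign count is easy to automate. The small helper below is ours, not the book's; applied to the example above, it reproduces both counts:

def signChanges(c):
    # Number of sign changes in a coefficient sequence (zeros ignored).
    signs = [x > 0 for x in c if x != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

print(signChanges([27.0, -8.0, -2.0, 1.0]))   # P3(x): 2 sign changes
print(signChanges([27.0, 8.0, -2.0, -1.0]))   # P3(-x): 1 sign change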
The real zeroes of polynomials with real coefficients can always be computed by one of the methods already described. But if complex roots are to be computed, it is best to use a method that specializes in polynomials. Here we present a method due to Laguerre, which is reliable and simple to implement. Before proceeding to Laguerre's method, we must first develop two numerical tools that are needed in any method capable of determining the zeroes of a polynomial. The first of these is an efficient algorithm for evaluating a polynomial and its derivatives. The second algorithm we need is for the deflation of a polynomial, that is, for dividing Pn(x) by x − r, where r is a root of Pn(x) = 0.
Evaluation of Polynomials
It is tempting to evaluate the polynomial in Eq. (4.9) from left to right by the following algorithm (we assume that the coefficients are stored in the array a):

p = 0.0
for i in range(n+1):
    p = p + a[i]*x**i
Because x^k is evaluated as x × x × · · · × x (k − 1 multiplications), we deduce that the number of multiplications in this algorithm is

1 + 2 + 3 + · · · + (n − 1) = n(n − 1)/2
If n is large, the number of multiplications can be reduced considerably if we evaluate the polynomial from right to left. For an example, take

P4(x) = a0 + a1x + a2x^2 + a3x^3 + a4x^4

After rewriting the polynomial as

P4(x) = a0 + x{a1 + x[a2 + x(a3 + xa4)]}
we see that it can be evaluated from the innermost brackets outward, which is equivalent to the recurrence

P_0(x) = a_n
P_i(x) = a_{n−i} + x P_{i−1}(x),   i = 1, 2, . . . , n   (4.10)

leading to the algorithm below, which uses only n multiplications.
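Written out, and assuming as before that the coefficients are stored in the array a, the recurrence becomes a short loop:

p = a[n]
for i in range(1,n+1):
    p = a[n-i] + p*x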
Some root-finding algorithms, including Laguerre's method, also require evaluation of the first and second derivatives of Pn(x). From Eq. (4.10) we obtain by differentiation

P'_0(x) = 0    P'_i(x) = P_{i−1}(x) + x P'_{i−1}(x)
P''_0(x) = 0   P''_i(x) = 2P'_{i−1}(x) + x P''_{i−1}(x),   i = 1, 2, . . . , n   (4.11)

All three recurrences can be evaluated together in a single loop, as is done in the module evalPoly below.
## module evalPoly
''' p,dp,ddp = evalPoly(a,x).
    Evaluates the polynomial
    p = a[0] + a[1]*x + a[2]*x^2 +...+ a[n]*x^n
    with its derivatives dp = p' and ddp = p''.
'''
def evalPoly(a,x):
    n = len(a) - 1
    p = a[n]
    dp = 0.0 + 0.0j
    ddp = 0.0 + 0.0j
    for i in range(1,n+1):
        ddp = ddp*x + 2.0*dp   # Recurrence for p'' in Eq. (4.11)
        dp = dp*x + p          # Recurrence for p' in Eq. (4.11)
        p = p*x + a[n-i]       # Horner recurrence, Eq. (4.10)
    return p,dp,ddp
Deflation of Polynomials

After a root r of Pn(x) = 0 has been computed, it is desirable to remove it before searching for the remaining roots. Deflation is advantageous because the degree of the polynomial is reduced every time a root is found. Moreover, by eliminating the roots that have already been found, the chances of computing the same root more than once are eliminated. Deflation consists of writing Pn(x) = (x − r)P_{n−1}(x), where P_{n−1}(x) = b_0 + b_1x + · · · + b_{n−1}x^{n−1}; equating the coefficients of like powers of x yields the synthetic division algorithm

b_{n−1} = a_n        b_i = a_{i+1} + r b_{i+1},   i = n − 2, n − 3, . . . , 0
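The recurrence is easy to exercise in isolation. Here is a minimal standalone sketch of it (the nested function deflPoly in the polyRoots listing below does the same job in complex arithmetic):

def deflate(a, r):
    # Divides a[0] + a[1]*x +...+ a[n]*x^n by (x - r) using the
    # synthetic-division recurrence above; r must be a root.
    n = len(a) - 1
    b = [0.0]*n
    b[n-1] = a[n]
    for i in range(n-2, -1, -1):
        b[i] = a[i+1] + r*b[i+1]
    return b

print(deflate([2.0, -3.0, 1.0], 2.0))   # x^2 - 3x + 2 = (x - 2)(x - 1) -> [-1.0, 1.0]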
Laguerre's Method

Laguerre's formulas are not easily derived for a general polynomial Pn(x). However, the derivation is greatly simplified if we consider the special case where the polynomial has a zero at x = r and (n − 1) zeroes at x = q. Hence, the polynomial can be
written as

Pn(x) = (x − r)(x − q)^(n−1)   (a)

Our problem is now this: given the polynomial in Eq. (a) in the form

Pn(x) = a0 + a1x + a2x^2 + · · · + anx^n
determine r (note that q is also unknown). It turns out that the result, which is exact for the special case considered here, works well as an iterative formula with any polynomial.
Differentiating Eq. (a) with respect to x, we get

P'n(x) = (x − q)^(n−1) + (n − 1)(x − r)(x − q)^(n−2)
       = Pn(x) [ 1/(x − r) + (n − 1)/(x − q) ]

Thus

P'n(x)/Pn(x) = 1/(x − r) + (n − 1)/(x − q)   (4.12)

which upon differentiation becomes

P''n(x)/Pn(x) − [P'n(x)/Pn(x)]^2 = −1/(x − r)^2 − (n − 1)/(x − q)^2   (4.13)

It is convenient to introduce the notation

G(x) = P'n(x)/Pn(x)        H(x) = G^2(x) − P''n(x)/Pn(x)   (4.14)

so that Eqs. (4.12) and (4.13) become

G(x) = 1/(x − r) + (n − 1)/(x − q)
H(x) = 1/(x − r)^2 + (n − 1)/(x − q)^2   (4.15)

If we solve the pair (4.15) for x − r, eliminating q in the process, the result is the Laguerre formula

x − r = n / { G(x) ± sqrt[ (n − 1)(nH(x) − G^2(x)) ] }   (4.16)

The procedure for finding a zero of a polynomial by Laguerre's formula is:
1. Let x be a guess for the root of Pn(x) = 0 (any value will do).
2. Evaluate Pn(x), P'n(x), and P''n(x) using the procedure outlined in Eqs. (4.11).
3. Compute G(x) and H(x) from Eqs. (4.14).
4. Determine the improved root r from Eq. (4.16), choosing the sign that results in the larger magnitude of the denominator (this can be shown to improve convergence).
5. Let x ← r and repeat steps 2–5 until |Pn(x)| < ε or |x − r| < ε, where ε is the error tolerance.
One nice property of Laguerre's method is that it converges to a root, with very few exceptions, from any starting value of x.
polyRoots

The function polyRoots in this module computes all the roots of Pn(x) = 0, where the polynomial Pn(x) is defined by its coefficient array a = [a0, a1, . . . , an]. After the first root is computed by the nested function laguerre, the polynomial is deflated using deflPoly and the next zero is computed by applying laguerre to the deflated polynomial. This process is repeated until all n roots have been found. If a computed root has a very small imaginary part, it is more than likely that it represents roundoff error; therefore, polyRoots replaces a tiny imaginary part by zero.
## module polyRoots
''' roots = polyRoots(a).
    Uses Laguerre's method to compute all the roots of
    a[0] + a[1]*x + a[2]*x^2 +...+ a[n]*x^n = 0.
    The roots are returned in the array 'roots'.
'''
from evalPoly import *
from numpy import zeros,complex
from cmath import sqrt
from random import random

def polyRoots(a,tol=1.0e-12):

    def laguerre(a,tol):
        x = random()   # Starting value (random number)
        n = len(a) - 1
        for i in range(30):
            p,dp,ddp = evalPoly(a,x)
            if abs(p) < tol: return x
            g = dp/p                     # G(x) from Eq. (4.14)
            h = g*g - ddp/p              # H(x) from Eq. (4.14)
            f = sqrt((n - 1)*(n*h - g*g))
            if abs(g + f) > abs(g - f): dx = n/(g + f)   # Eq. (4.16)
            else: dx = n/(g - f)
            x = x - dx
            if abs(dx) < tol: return x
        print 'Too many iterations'

    def deflPoly(a,root):   # Deflates a polynomial
        n = len(a) - 1
        b = [(0.0 + 0.0j)]*n
        b[n-1] = a[n]
        for i in range(n-2,-1,-1):
            b[i] = a[i+1] + root*b[i+1]
        return b

    n = len(a) - 1
    roots = zeros((n),dtype=complex)
    for i in range(n):
        x = laguerre(a,tol)
        if abs(x.imag) < tol: x = x.real   # Discard tiny imaginary part
        roots[i] = x
        a = deflPoly(a,x)
    return roots
Because the roots are computed with finite accuracy, each deflation introduces small errors in the coefficients of the deflated polynomial. The accumulated roundoff error increases with the degree of the polynomial and can become severe if the polynomial is ill conditioned (small changes in the coefficients produce large changes in the roots). Hence the results should be viewed with caution when dealing with polynomials of high degree.

The errors caused by deflation can be reduced by recomputing each root using the original, undeflated polynomial. The roots obtained previously in conjunction with deflation are employed as the starting values.
EXAMPLE 4.10
A zero of the polynomial P4(x) = 3x^4 − 10x^3 − 48x^2 − 2x + 12 is x = 6. Deflate the polynomial.

Solution Applying the synthetic division algorithm with r = 6 and a = [12, −2, −48, −10, 3] gives b3 = 3, b2 = −10 + 6(3) = 8, b1 = −48 + 6(8) = 0, and b0 = −2 + 6(0) = −2, so that the deflated polynomial is

P3(x) = 3x^3 + 8x^2 − 2
EXAMPLE 4.11
A root of the equation P3(x) = x^3 − 4.0x^2 − 4.48x + 26.1 is approximately x = 3 − i. Find a more accurate value of this root by one application of Laguerre's iterative formula.
Solution Use the given estimate of the root as the starting value. Thus

x = 3 − i    x^2 = 8 − 6i    x^3 = 18 − 26i

Substituting these values into the polynomial and its derivatives gives

P3(x) = x^3 − 4.0x^2 − 4.48x + 26.1 = −1.34 + 2.48i
P'3(x) = 3x^2 − 8.0x − 4.48 = −4.48 − 10.0i
P''3(x) = 6x − 8.0 = 10.0 − 6.0i

Evaluating G(x) and H(x) from Eqs. (4.14) and substituting into Eq. (4.16), with the sign chosen to maximize the magnitude of the denominator, yields the improved root

r ≈ 3.1979 − 0.7988i
Thanks to the good starting value, this approximation is already quite close to the exact value r = 3.20 − 0.80i.
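The same step can be reproduced with a few lines of standalone Python. This is a sketch that mirrors the update used inside the laguerre function above; the variable names here are ad hoc:

from cmath import sqrt

a = [26.1, -4.48, -4.0, 1.0]        # P3(x) = x^3 - 4.0x^2 - 4.48x + 26.1
n = len(a) - 1
x = 3.0 - 1.0j                      # starting value
p = sum(a[i]*x**i for i in range(n + 1))                       # P(x)
dp = sum(i*a[i]*x**(i - 1) for i in range(1, n + 1))           # P'(x)
ddp = sum(i*(i - 1)*a[i]*x**(i - 2) for i in range(2, n + 1))  # P''(x)
g = dp/p                            # G(x), Eq. (4.14)
h = g*g - ddp/p                     # H(x), Eq. (4.14)
f = sqrt((n - 1)*(n*h - g*g))
denom = g + f if abs(g + f) > abs(g - f) else g - f
print(x - n/denom)                  # approximately (3.1979-0.7988j)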
EXAMPLE 4.12
Use polyRoots to compute all the roots of x^4 − 5x^3 − 9x^2 + 155x − 250 = 0.
Solution The commands

>>> from polyRoots import *
>>> print polyRoots([-250.0,155.0,-9.0,-5.0,1.0])

produce the four roots x = 2, x = −5, and x = 4 ± 3i (the tiny imaginary parts of the real roots are set to zero by polyRoots, and the order in which the roots appear depends on the random starting values).
In Problems 6–9 a zero x = r of Pn(x) is given. Determine all the other zeroes of Pn(x) by using a calculator. You should need no tools other than deflation and the quadratic formula.
Other Methods

The most prominent root-finding algorithm omitted from this chapter is Brent's method, which combines bisection and quadratic interpolation. It is potentially more efficient than Ridder's method, requiring only one function evaluation per iteration (as compared to two evaluations in Ridder's method), but this advantage is somewhat negated by elaborate bookkeeping.
There are many methods for finding zeroes of polynomials. Of these, the Jenkins–Traub algorithm2 deserves special mention because of its robustness and widespread use in packaged software.
The zeroes of a polynomial can also be obtained by calculating the eigenvalues of the n × n "companion matrix"

A = [ −a_{n−1}/a_n  −a_{n−2}/a_n  · · ·  −a_1/a_n  −a_0/a_n ]
    [      1             0        · · ·      0         0    ]
    [      0             1        · · ·      0         0    ]
    [     ...           ...                 ...       ...   ]
    [      0             0        · · ·      1         0    ]

The characteristic equation of this matrix is

x^n + (a_{n−1}/a_n) x^{n−1} + · · · + (a_1/a_n) x + a_0/a_n = 0

which is equivalent to Pn(x) = 0. Thus the eigenvalues of A are the zeroes of Pn(x).
The eigenvalue method is robust, but considerably slower than Laguerre's method. Nevertheless, it is worth considering if a good program for eigenvalue problems is available.
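For illustration, here is one way to set up the companion matrix in Python and extract its eigenvalues with numpy. This is a sketch, not the book's code, and the helper name companionRoots is ours:

from numpy import zeros
from numpy.linalg import eigvals

def companionRoots(a):
    # Zeroes of a[0] + a[1]*x +...+ a[n]*x^n via the companion matrix.
    n = len(a) - 1
    A = zeros((n,n))
    for j in range(n):            # first row: -a[n-1-j]/a[n]
        A[0,j] = -a[n-1-j]/a[n]
    for i in range(1,n):          # ones on the subdiagonal
        A[i,i-1] = 1.0
    return eigvals(A)

print(companionRoots([-250.0,155.0,-9.0,-5.0,1.0]))   # 2, -5, 4 +/- 3i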
2 M. Jenkins and J. Traub, SIAM Journal on Numerical Analysis, Vol. 7 (1970), p. 545.
5 Numerical Differentiation
Given the function f(x), compute d^n f/dx^n at a given x.
5.1 Introduction
Numerical differentiation deals with the following problem: We are given the function y = f(x) and wish to obtain one of its derivatives at the point x = xk. The term "given" means that we either have an algorithm for computing the function or possess a set of discrete data points (xi, yi), i = 0, 1, . . . , n. In either case, we have access to a finite number of (x, y) data pairs from which to compute the derivative. If you suspect by now that numerical differentiation is related to interpolation, you are right: one means of finding the derivative is to approximate the function locally by a polynomial and then differentiate it. An equally effective tool is the Taylor series expansion of f(x) about the point xk, which has the advantage of providing us with information about the error involved in the approximation.

Numerical differentiation is not a particularly accurate process. It suffers from a conflict between roundoff errors (due to limited machine precision) and errors inherent in interpolation. For this reason, a derivative of a function can never be computed with the same precision as the function itself.
5.2 Finite Difference Approximations
The derivation of the finite difference approximations for the derivatives of f(x) is based on forward and backward Taylor series expansions of f(x) about x, such as

f(x + h) = f(x) + h f'(x) + (h^2/2!) f''(x) + (h^3/3!) f'''(x) + (h^4/4!) f^(4)(x) + · · ·   (a)

f(x − h) = f(x) − h f'(x) + (h^2/2!) f''(x) − (h^3/3!) f'''(x) + (h^4/4!) f^(4)(x) − · · ·   (b)

f(x + 2h) = f(x) + 2h f'(x) + ((2h)^2/2!) f''(x) + ((2h)^3/3!) f'''(x) + ((2h)^4/4!) f^(4)(x) + · · ·   (c)

f(x − 2h) = f(x) − 2h f'(x) + ((2h)^2/2!) f''(x) − ((2h)^3/3!) f'''(x) + ((2h)^4/4!) f^(4)(x) − · · ·   (d)
We also record the sums and differences of the series:

f(x + h) + f(x − h) = 2f(x) + h^2 f''(x) + (h^4/12) f^(4)(x) + · · ·   (e)

f(x + h) − f(x − h) = 2h f'(x) + (h^3/3) f'''(x) + · · ·   (f)

f(x + 2h) + f(x − 2h) = 2f(x) + 4h^2 f''(x) + (4h^4/3) f^(4)(x) + · · ·   (g)

f(x + 2h) − f(x − 2h) = 4h f'(x) + (8h^3/3) f'''(x) + · · ·   (h)

Note that the sums contain only even derivatives, whereas the differences retain just the odd derivatives. Equations (a)–(h)
can be solved for various derivatives of f(x). The number of equations involved and the number of terms kept in each equation depend on the order of the derivative and the desired degree of accuracy.
First Central Difference Approximations
The solution of Eq. (f) for f'(x) is

f'(x) = [f(x + h) − f(x − h)]/(2h) − (h^2/6) f'''(x) − · · ·

or

f'(x) = [f(x + h) − f(x − h)]/(2h) + O(h^2)   (5.1)

which is called the first central difference approximation for f'(x). The term O(h^2) reminds us that the truncation error behaves as h^2.
Similarly, Eq. (e) yields the first central difference approximation for f''(x):

f''(x) = [f(x + h) − 2f(x) + f(x − h)]/h^2 − (h^2/12) f^(4)(x) − · · ·

or

f''(x) = [f(x + h) − 2f(x) + f(x − h)]/h^2 + O(h^2)   (5.2)
Central difference approximations for other derivatives can be obtained from Eqs. (a)–(h) in the same manner. For example, eliminating f'(x) from Eqs. (f) and (h) and solving for f'''(x) yields

f'''(x) = [f(x + 2h) − 2f(x + h) + 2f(x − h) − f(x − 2h)]/(2h^3) + O(h^2)
The approximation

f^(4)(x) = [f(x + 2h) − 4f(x + h) + 6f(x) − 4f(x − h) + f(x − 2h)]/h^4 + O(h^2)

is available from Eqs. (e) and (g) after eliminating f''(x). Table 5.1 summarizes the results.
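These formulas are easy to verify numerically. The standalone check below (not from the book) applies the third-derivative formula to f(x) = sin x, for which f'''(x) = −cos x:

from math import sin, cos

f, x, h = sin, 1.0, 0.01
d3 = (f(x + 2*h) - 2*f(x + h) + 2*f(x - h) - f(x - 2*h))/(2*h**3)
print("%.4f %.4f" % (d3, -cos(x)))   # both approximately -0.5403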
First Noncentral Finite Difference Approximations
Central finite difference approximations are not always usable. For example, consider the situation where the function is given at the n + 1 discrete points x0, x1, . . . , xn. Because central differences use values of the function on each side of x, we would be unable to compute the derivatives at x0 and xn. Clearly, there is a need for finite difference expressions that require evaluations of the function on only one side of x. These expressions are called forward and backward finite difference approximations.
Noncentral finite differences can also be obtained from Eqs. (a)–(h). Solving Eq. (a) for f'(x), we get

f'(x) = [f(x + h) − f(x)]/h − (h/2) f''(x) − (h^2/6) f'''(x) − · · ·

Keeping only the first term on the right-hand side leads to the first forward difference approximation

f'(x) = [f(x + h) − f(x)]/h + O(h)

Similarly, Eq. (b) yields the first backward difference approximation

f'(x) = [f(x) − f(x − h)]/h + O(h)
Second Noncentral Finite Difference Approximations
Finite difference approximations of O(h) are not popular, for reasons to be explained shortly. The common practice is to use expressions of O(h^2). To obtain noncentral difference formulas of this order, we have to retain more terms in the Taylor series. As an illustration, we derive the expression for f'(x). We start with Eqs. (a) and (c), which are

f(x + h) = f(x) + h f'(x) + (h^2/2) f''(x) + (h^3/6) f'''(x) + (h^4/24) f^(4)(x) + · · ·

f(x + 2h) = f(x) + 2h f'(x) + 2h^2 f''(x) + (4h^3/3) f'''(x) + (2h^4/3) f^(4)(x) + · · ·
We eliminate f''(x) by multiplying the first equation by 4 and subtracting it from the second equation. The result is

f(x + 2h) − 4f(x + h) = −3f(x) − 2h f'(x) + (2h^3/3) f'''(x) + (h^4/2) f^(4)(x) + · · ·
Therefore,

f'(x) = [−f(x + 2h) + 4f(x + h) − 3f(x)]/(2h) + (h^2/3) f'''(x) + (h^3/4) f^(4)(x) + · · ·

or

f'(x) = [−f(x + 2h) + 4f(x + h) − 3f(x)]/(2h) + O(h^2)
This expression is called the second forward finite difference approximation.
The derivation of finite difference approximations for higher derivatives involves additional Taylor series. Thus the forward difference approximation for f''(x) utilizes series for f(x + h), f(x + 2h), and f(x + 3h); the approximation for f'''(x) involves Taylor expansions for f(x + h), f(x + 2h), f(x + 3h), f(x + 4h), and so on. As you can see, the computations for high-order derivatives can become rather tedious. The results for both the forward and backward finite differences are summarized in Tables 5.3a and 5.3b.

Table 5.3b Coefficients of backward finite difference approximations of O(h^2)
Errors in Finite Difference Approximations
Observe that in all finite difference expressions the sum of the coefficients is zero. The effect on the roundoff error can be profound. If h is very small, the values of f(x), f(x ± h), f(x ± 2h), and so forth will be approximately equal. When they are multiplied by the coefficients and added, several significant figures can be lost. On the other hand, we cannot make h too large, because then the truncation error would become excessive. This unfortunate situation has no remedy, but we can obtain some relief by taking the following precautions:
• Use double-precision arithmetic.
• Employ finite difference formulas that are accurate to at least O(h^2).
To illustrate the errors, let us compute the second derivative of f(x) = e^(−x) at x = 1 from the central difference formula, Eq. (5.2). We carry out the calculations with six- and eight-digit precision, using different values of h. The results, shown in Table 5.4, should be compared with f''(1) = e^(−1) = 0.367 879 44.
In the six-digit computations, the optimal value of h is 0.08, yielding a result accurate to three significant figures. Hence three significant figures are lost because of a combination of truncation and roundoff errors. Above optimal h, the dominant error is due to truncation; below it, the roundoff error becomes pronounced. The best result obtained with the eight-digit computation is accurate to four significant figures. Because the extra precision decreases the roundoff error, the optimal h is smaller (about 0.02) than in the six-figure calculations.
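The experiment is easy to repeat in ordinary double precision (roughly 16 significant digits), where the roundoff floor lies at much smaller h; over the range of Table 5.4 the error is then dominated by truncation and shrinks roughly as h^2. A minimal sketch:

from math import exp

# Central difference approximation of f''(1) for f(x) = exp(-x);
# exact value is exp(-1) = 0.36787944...
x = 1.0
for h in [0.64, 0.32, 0.16, 0.08, 0.04, 0.02, 0.01]:
    d2 = (exp(-(x + h)) - 2.0*exp(-x) + exp(-(x - h)))/h**2
    print("%5.2f  %.8f" % (h, d2))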
5.3 Richardson Extrapolation
Richardson extrapolation is a simple method for boosting the accuracy of certain numerical procedures, including finite difference approximations (we also use it later in other applications).

Suppose that we have an approximate means of computing some quantity G. Moreover, assume that the result depends on a parameter h. Denoting the approximation by g(h), we have G = g(h) + E(h), where E(h) represents the error. Richardson extrapolation can remove the error, provided that it has the form E(h) = ch^p, c and p being constants. We start by computing g(h) with some value of h, say, h = h1. In that case we have

G = g(h1) + c h1^p

Then we repeat the calculation with h = h2, so that

G = g(h2) + c h2^p

Eliminating c and solving for G, we obtain

G = [ (h1/h2)^p g(h2) − g(h1) ] / [ (h1/h2)^p − 1 ]   (5.8)

which is the Richardson extrapolation formula. It is common practice to use h2 = h1/2, in which case Eq. (5.8) becomes

G = [ 2^p g(h1/2) − g(h1) ] / (2^p − 1)   (5.9)
Let us illustrate Richardson extrapolation by applying it to the finite difference approximation of (e^(−x))'' at x = 1. We work with six-digit precision and utilize the results in Table 5.4. Because the extrapolation works only on the truncation error, we must confine h to values that produce negligible roundoff. In Table 5.4 we have

g(0.64) = 0.380 610        g(0.32) = 0.371 035

The truncation error in the central difference approximation is E(h) = O(h^2) = c1h^2 + c2h^4 + c3h^6 + · · ·. Therefore, we can eliminate the first (dominant) error term if we substitute p = 2 and h1 = 0.64 in Eq. (5.9). The result is

G = [4g(0.32) − g(0.64)]/3 = [4(0.371 035) − 0.380 610]/3 = 0.367 843

which is much closer to the exact value 0.367 879 44 than either entry of the table.
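In code, Eq. (5.9) is a one-liner; the helper name richardson below is ours:

def richardson(g1, g2, p):
    # Eq. (5.9): combines g(h1) and g(h1/2), assuming E(h) = c*h**p.
    return (2.0**p*g2 - g1)/(2.0**p - 1.0)

print("%.6f" % richardson(0.380610, 0.371035, 2))   # 0.367843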
EXAMPLE 5.2
Use the data in Example 5.1 to compute f'(0) as accurately as you can.
Solution One solution is to apply Richardson extrapolation to finite difference approximations. We start with two forward difference approximations of O(h^2) for f'(0):
Recall that the error in both approximations is of the form E(h) = c1h^2 + c2h^4 + c3h^6 + · · ·. We can now use Richardson extrapolation, Eq. (5.9), to eliminate the dominant error term. With p = 2 we obtain

G = [2^2 g(h1/2) − g(h1)]/(2^2 − 1) = [4g(h1/2) − g(h1)]/3
EXAMPLE 5.3
The angles α and β of a four-bar linkage with link lengths a, b, c, and d satisfy the transcendental equation

(d − a cos α − b cos β)^2 + (a sin α + b sin β)^2 − c^2 = 0

For a given value of α, we can solve this transcendental equation for β by one of the root-finding methods in Chapter 4. This was done with α = 0°, 5°, 10°, . . . , 30°, the results being

(table of β versus α)
If link AB rotates with the constant angular velocity of 25 rad/s, use finite difference approximations of O(h^2) to tabulate the angular velocity dβ/dt of link BC against α.
Solution The angular speed of BC is

dβ/dt = (dβ/dα)(dα/dt) = 25 (dβ/dα) rad/s
where dβ/dα can be computed from finite difference approximations using the data in the table. Forward and backward differences of O(h^2) are used at the endpoints, central differences elsewhere. Note that the increment of α is h = 5° = 0.087 266 rad.
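A sketch of how this tabulation might be programmed follows. The function name is ours, and it assumes the β values from the (missing) table are stored in a list:

from math import pi

def derivTable(y, h):
    # dy/dx of O(h^2) for equally spaced data (len(y) >= 3): second
    # forward/backward differences at the endpoints, first central
    # differences elsewhere.
    n = len(y)
    d = [0.0]*n
    d[0] = (-y[2] + 4.0*y[1] - 3.0*y[0])/(2.0*h)
    d[n-1] = (3.0*y[n-1] - 4.0*y[n-2] + y[n-3])/(2.0*h)
    for i in range(1, n-1):
        d[i] = (y[i+1] - y[i-1])/(2.0*h)
    return d

h = 5.0*pi/180.0   # increment of alpha in radians
# dbeta_dt = [25.0*v for v in derivTable(beta, h)]   # beta: tabulated values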
5.4 Derivatives by Interpolation

If a function is given as a set of data points, its derivatives can be computed by passing an interpolant through the points and approximating the derivative of the function by the derivative of the interpolant. This method is particularly useful if the data points are located at uneven intervals of x, when the finite difference approximations listed in the last section are not applicable.1
Polynomial Interpolant
The idea here is simple: fit the polynomial of degree n

Pn(x) = a0 + a1x + a2x^2 + · · · + anx^n

through n + 1 data points and then evaluate its derivatives at the given x. As pointed out in Section 3.2, it is generally advisable to limit the degree of the polynomial to less than 6 in order to avoid spurious oscillations of the interpolant. Because these oscillations are magnified with each differentiation, their effect can be devastating. In view of this limitation, the interpolation is usually a local one, involving no more than a few nearest-neighbor data points.
For evenly spaced data points, polynomial interpolation and finite difference approximations produce identical results. In fact, the finite difference formulas are equivalent to polynomial interpolation.
1 It is possible to derive finite difference approximations for unevenly spaced data, but they would not be as accurate as the formulas derived in Section 5.2.
Several methods of polynomial interpolation were introduced in Section 3.2. Unfortunately, none of them is suited for the computation of derivatives of the interpolant. The method that we need is one that determines the coefficients a0, a1, . . . , an of the polynomial. There is only one such method discussed in Chapter 3: the least-squares fit. Although this method is designed mainly for smoothing of data, it will carry out interpolation if we use m = n in Eq. (3.22) (recall that m is the degree of the interpolating polynomial and n + 1 represents the number of data points to be fitted). If the data contains noise, then the least-squares fit should be used in the smoothing mode, that is, with m < n. After the coefficients of the polynomial have been found, the polynomial and its first two derivatives can be evaluated efficiently by the function evalPoly listed in Section 4.7.
Cubic Spline Interpolant
Because of its stiffness, the cubic spline is a good global interpolant; moreover, it is easy to differentiate. The first step is to determine the second derivatives k_i of the spline at the knots by solving Eqs. (3.11). This can be done with the function curvatures in the module cubicSpline listed in Section 3.3. The first and second derivatives are then computed from

f'_{i,i+1}(x) = (k_i/6) [ 3(x − x_{i+1})^2/(x_i − x_{i+1}) − (x_i − x_{i+1}) ]
             − (k_{i+1}/6) [ 3(x − x_i)^2/(x_i − x_{i+1}) − (x_i − x_{i+1}) ]
             + (y_i − y_{i+1})/(x_i − x_{i+1})

f''_{i,i+1}(x) = [ k_i (x − x_{i+1}) − k_{i+1} (x − x_i) ] / (x_i − x_{i+1})

where (x_i, y_i) and (x_{i+1}, y_{i+1}) are the knots bracketing x.
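These formulas translate directly into code. The following is a minimal sketch (the function name is ours, and it assumes the knot curvatures k have already been computed, e.g., by the book's curvatures function):

def splineDeriv(xData, yData, k, i, x):
    # First and second derivatives of the cubic spline on the segment
    # between knots i and i+1, given the knot curvatures k.
    dx = xData[i] - xData[i+1]
    dp = k[i]/6.0*(3.0*(x - xData[i+1])**2/dx - dx) \
       - k[i+1]/6.0*(3.0*(x - xData[i])**2/dx - dx) \
       + (yData[i] - yData[i+1])/dx
    ddp = (k[i]*(x - xData[i+1]) - k[i+1]*(x - xData[i]))/dx
    return dp, ddp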
Solution of Part (1) The interpolant is P2(x) = a0 + a1x + a2x^2 passing through the points at x = 1.9, 2.1, and 2.4. The normal equations, Eqs. (3.22), of the least-squares fit are