Hermite's interpolation formula for the two nodes x0 = a and x1 = b is

\[
H(x) = \bigl[1 - 2(x - x_0)L_0'(x_0)\bigr][L_0(x)]^2\,y_0
     + \bigl[1 - 2(x - x_1)L_1'(x_1)\bigr][L_1(x)]^2\,y_1
     + (x - x_0)[L_0(x)]^2\,y_0' + (x - x_1)[L_1(x)]^2\,y_1'. \qquad (1)
\]

Now,

\[
L_0(x) = \frac{x - x_1}{x_0 - x_1} = \frac{x - b}{a - b}, \qquad
L_1(x) = \frac{x - x_0}{x_1 - x_0} = \frac{x - a}{b - a},
\]

so that

\[
L_0'(x) = \frac{1}{a - b} \quad\text{and}\quad L_1'(x) = \frac{1}{b - a}.
\]

Hence

\[
L_0'(x_0) = \frac{1}{a - b} \quad\text{and}\quad L_1'(x_1) = \frac{1}{b - a}.
\]

Therefore, from equation (1),

\[
H(x) = \left[1 - \frac{2(x - a)}{a - b}\right]\!\left(\frac{x - b}{a - b}\right)^{2} f(a)
     + \left[1 - \frac{2(x - b)}{b - a}\right]\!\left(\frac{x - a}{b - a}\right)^{2} f(b)
     + (x - a)\!\left(\frac{x - b}{a - b}\right)^{2} f'(a)
     + (x - b)\!\left(\frac{x - a}{b - a}\right)^{2} f'(b).
\]

Putting x = (a + b)/2,

\[
H\!\left(\frac{a + b}{2}\right)
   = \frac{1}{2} f(a) + \frac{1}{2} f(b) + \frac{b - a}{8} f'(a) - \frac{b - a}{8} f'(b)
   = \frac{f(a) + f(b)}{2} - \frac{b - a}{8}\bigl[f'(b) - f'(a)\bigr].
\]
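This midpoint formula is easy to check numerically. The sketch below is a minimal Python illustration of my own (not from the text); the function name hermite_midpoint and the choice of test function f(x) = log_e x on [2, 3] are assumptions made only for demonstration.

```python
import math

def hermite_midpoint(fa, fb, dfa, dfb, a, b):
    """Two-point Hermite interpolant H(x) evaluated at the midpoint x = (a + b)/2:
    H = (f(a) + f(b))/2 + (b - a) * (f'(a) - f'(b)) / 8."""
    return (fa + fb) / 2 + (b - a) * (dfa - dfb) / 8

# Illustration with f(x) = ln x on [2, 3]; f'(x) = 1/x.
a, b = 2.0, 3.0
approx = hermite_midpoint(math.log(a), math.log(b), 1/a, 1/b, a, b)
exact = math.log((a + b) / 2)
print(approx, exact, abs(approx - exact))   # error is of the order 1e-4
```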
PROBLEM SET 5.4
1. Apply Hermite's formula to find a polynomial which meets the following specifications (tabulated x_k, y_k, y_k'):
[Ans. x^4 - 4x^3 + 4x^2]
2. Apply Hermite's interpolation to find f(1.05), given: [Ans. 1.02470]
3. Apply Hermite's interpolation to find log_e 2.05, given a table of x, log_e x and (log_e x)': [Ans. 0.71784]
4. Determine the Hermite polynomial of degree 5 which fits the following data (x, log_e x, (log_e x)') and hence find an approximate value of log_e 2.7. [Ans. 0.993252]
5. Find y = f(x) by Hermite's interpolation from the table. Compute y_2 and y_2'. [Ans. 1 + x - x^2 + 2x^4; y_2 = 31, y_2' = 61]
6. Compute e^0.5 by Hermite's formula for the function f(x) = e^x at the points 0 and 1. Compare the value with that obtained by using Lagrange's interpolation.
[Ans. (1 + 3x)(1 - x)^2 + (2 - x) e x^2; 1.644, 1.859]
7. Apply Hermite's formula to find a polynomial which meets the following specifications:
8. Apply the osculating interpolation formula to find a polynomial which meets the following requirements: [Ans. x^4 - 4x^3 + 4x^2]
9. Apply Hermite's interpolation formula to find f(x) at x = 0.5, which meets the following requirements:
10. Construct the Hermite interpolation polynomial that fits the data. Estimate the value of f(1.5).
[Ans. 29.556x^3 - 85.793x^2 + 97.696x - 34.07; 19.19125]
11. (i) Construct the Hermite interpolation polynomial that fits the data. Estimate the value of f(0.75).
(ii) Construct the Hermite interpolation polynomial that fits the data. Interpolate y(x) at x = 0.5 and 1.5.
12. Obtain the unique polynomial p(x) of degree 3 or less corresponding to a function f(x), where f(0) = 1, f'(0) = 2, f(1) = 5, f'(1) = 4.
13. (i) Construct the Hermite interpolation polynomial that fits the data. Interpolate f(x) at x = 2.5.
(ii) Fit the cubic polynomial P(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3 to the data given in problem 13(i). Are these two polynomials the same?
5.7.1 Some Remarks on the Choice among Different Interpolation Formulae
We have derived several central difference interpolation formulae. An obvious question arises: which of these formulae gives the most accurate result?
(1) If interpolation is required near the beginning or end of the given data, the only alternative is to apply Newton's forward or backward difference formula.
(2) For interpolation near the centre of the given data, Stirling's formula gives the most accurate result for -1/4 < u < 1/4, while Bessel's formula is most efficient near u = 1/2, i.e., for 1/4 ≤ u ≤ 3/4.
(3) But where a series of calculations has to be made, it would be inconvenient to use both of these (Stirling's and Bessel's) formulae; the choice then depends on the order of the highest difference that can be neglected, so that contributions from it and further differences are less than half a unit in the last decimal place. If this highest difference is of odd order, Stirling's formula is recommended; if it is of even order, Bessel's formula might be preferred.
(4) It is known from algebra that the nth degree polynomial which passes through (n + 1) points is unique. Hence the various interpolation formulae derived here are actually only different forms of the same polynomial. Therefore all the interpolation formulae should give the same functional value.
(5) Here we have discussed several interpolation formulae for equispaced argument values. The most important point about these formulae is that the coefficients in the central difference formulae are smaller and converge faster than those in Newton's formulae. After a few terms, the coefficients in Stirling's formula decrease more rapidly than those of Bessel's formula, and the coefficients of Bessel's formula decrease more rapidly than those of Newton's formulae. Therefore, whenever possible, central difference formulae should be used in preference to Newton's formulae. However, the right choice depends on the position of the interpolated value in the given pairs of values.
FIG. 5.1 The zig-zag paths through the difference table (Δ^3, Δ^4, Δ^5, Δ^6, …) for the various formulae: Newton's forward, Newton's backward, Gauss forward, Gauss backward, Stirling's, Bessel's, and Laplace–Everett's.
5.7.2 Approximation of Functions
To evaluate most mathematical functions, we must first produce computable approximations to them. Functions are defined in a variety of ways in applications, with integrals and infinite series being the most common types of formulas used for the definition. Such a definition is useful in establishing the properties of the function, but it is generally not an efficient way to evaluate the function. In this part we examine the use of polynomials as approximations to a given function.
For evaluating a function f(x) on a computer, it is generally more efficient in space and time to have an analytic approximation to f(x) rather than to store a table and use interpolation; that is, function evaluation through interpolation over a stored table of values has been found to be considerably more costly than the use of efficient function approximations. It is also desirable to use the lowest possible degree of polynomial that will give the desired accuracy in approximating f(x). The amount of time and effort expended on producing an approximation should be directly proportional to how much the approximation will be used. If it is only to be used a few times, a truncated Taylor series will often suffice. But if an approximation is to be used millions of times by many people, then much care should be taken in producing it. There are also forms of approximating function other than polynomials.
Let f_1, f_2, …, f_n be the values of the given function and φ_1, φ_2, …, φ_n the corresponding values of the approximating function. Then the error vector is e, whose components are given by e_i = f_i - φ_i. The approximation may be chosen in two ways. One is to find the approximation such that the quantity e_1^2 + e_2^2 + … + e_n^2 is a minimum; this leads to the least squares approximation. The second is to choose the approximation such that the maximum component of e is minimized; this leads to the Chebyshev polynomials, which have found important applications in the approximation of functions.
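The two criteria can be computed directly from the error vector, as in the small sketch below (my own illustration, not from the text; the sample points and the trial approximation φ(x) = x - 0.125 for f(x) = x^2 are invented purely for demonstration).

```python
# Compare the two criteria for choosing an approximation, using an invented
# example: approximate f(x) = x**2 on a few sample points in [-1, 1]
# by the straight line phi(x) = x - 0.125.
xs = [i / 10 for i in range(-10, 11)]
f = [x**2 for x in xs]
phi = [x - 0.125 for x in xs]

errors = [fi - pi for fi, pi in zip(f, phi)]     # components e_i = f_i - phi_i
sum_of_squares = sum(e**2 for e in errors)       # least-squares criterion
max_component = max(abs(e) for e in errors)      # minimax (Chebyshev) criterion
print(sum_of_squares, max_component)
```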
(i) Approximation of function by Taylor's series method: Taylor's series approximation is one of the most useful series expressions of a function. If a function f(x) possesses derivatives up to order (n + 1) in an interval [a, b] containing x = x0, then it can be expressed as
\[
f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2!}(x - x_0)^2 + \cdots
     + \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n
     + \frac{f^{(n+1)}(s)}{(n+1)!}(x - x_0)^{n+1}. \qquad (1)
\]
In the above expansion f'(x0), f''(x0), etc., are the first, second, … derivatives of f(x) evaluated at x0. The term f^(n+1)(s)(x - x0)^(n+1)/(n + 1)! is called the remainder term; the quantity s is a function of x and lies between x and x0. The remainder term gives the truncation error when the series is truncated after the term of degree n, i.e., when only the polynomial part is used to represent the function. The truncation error is thus
\[
\text{Truncation error} = \frac{f^{(n+1)}(s)}{(n+1)!}(x - x_0)^{n+1}
   \;\le\; \frac{M}{(n+1)!}\,\lvert x - x_0\rvert^{n+1},
\]

where M = max |f^(n+1)(s)| for x in [a, b].
Obviously, the Taylor series is a polynomial with base functions 1, (x - x0), (x - x0)^2, …, (x - x0)^n. The coefficients are the constants f(x0), f'(x0), f''(x0)/2!, …, f^(n)(x0)/n!. Thus the series can be written in nested form.
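A minimal sketch of my own (not from the text) of evaluating a truncated Taylor series in the nested (Horner) form described above, together with the truncation-error bound M|x - x0|^(n+1)/(n + 1)!, for the assumed example f(x) = e^x about x0 = 0:

```python
import math

def taylor_nested(coeffs, x, x0):
    """Evaluate c0 + c1*(x-x0) + ... + cn*(x-x0)**n in nested (Horner) form,
    where coeffs[k] = f^(k)(x0) / k!."""
    t = x - x0
    result = 0.0
    for c in reversed(coeffs):
        result = result * t + c
    return result

# f(x) = exp(x) about x0 = 0: coefficients are 1/k!.
n = 5
coeffs = [1 / math.factorial(k) for k in range(n + 1)]
x, x0 = 0.5, 0.0

approx = taylor_nested(coeffs, x, x0)
# Error bound M*|x - x0|**(n+1)/(n+1)!, with M = max of exp on [0, 0.5] = e**0.5.
bound = math.exp(0.5) * abs(x - x0)**(n + 1) / math.factorial(n + 1)
print(approx, math.exp(x), abs(approx - math.exp(x)), bound)
```

Running this shows the actual error (about 2.3e-5) staying below the bound (about 3.6e-5).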
(ii) Approximation of function by Chebyshev polynomials: Polynomials are linear combinations of the monomials 1, x, x^2, …, x^n. An examination of the monomials x, x^2, …, x^n on the interval (-1, +1) shows that each achieves its maximum magnitude 1 at x = ±1 and its minimum magnitude 0 at x = 0. Consequently, in a polynomial

y(x) = a0 + a1 x + a2 x^2 + … + an x^n,

dropping the higher order terms, or perturbing the coefficients a1, a2, …, an, will produce little error for x near zero but possibly substantial error near the ends of the interval (x near ±1). It is therefore reasonable to look for other sets of simple related functions whose extreme values are well distributed over the interval (-1, 1). We want approximations which are fairly easy to generate and which reduce the maximum error to a minimum. The cosine functions cos θ, cos 2θ, …, cos nθ are good candidates. The set of polynomials T_n(x) = cos nθ, n = 0, 1, …, generated from the sequence of cosine functions by means of the transformation θ = cos^(-1) x, is known as the Chebyshev polynomials. These polynomials are used in the theory of approximation of functions.
Chebyshev polynomials: The Chebyshev polynomial T_n(x) of the first kind of degree n over the interval [-1, 1] is defined by the relation

T_n(x) = cos(n cos^(-1) x).

Let cos^(-1) x = θ, so that x = cos θ; then T_n(x) = cos nθ. For n = 0, T_0(x) = 1; for n = 1, T_1(x) = x. The Chebyshev polynomials satisfy the recurrence relation

T_(n+1)(x) = 2x T_n(x) - T_(n-1)(x),   (2)

which can be obtained easily from the trigonometric identity cos(n + 1)θ + cos(n - 1)θ = 2 cos θ cos nθ.
This recurrence relation can be used to generate successively all the T_n(x), as well as to express the powers of x in terms of the Chebyshev polynomials. Some of the Chebyshev polynomials and the expansions of powers of x in terms of T_n(x) are given below:
T_2(x) = 2x^2 - 1,                      x^2 = (1/2)[T_0(x) + T_2(x)]
T_3(x) = 4x^3 - 3x,                     x^3 = (1/4)[3T_1(x) + T_3(x)]
T_4(x) = 8x^4 - 8x^2 + 1,               x^4 = (1/8)[3T_0(x) + 4T_2(x) + T_4(x)]
T_5(x) = 16x^5 - 20x^3 + 5x,            x^5 = (1/16)[10T_1(x) + 5T_3(x) + T_5(x)]
T_6(x) = 32x^6 - 48x^4 + 18x^2 - 1,     x^6 = (1/32)[10T_0(x) + 15T_2(x) + 6T_4(x) + T_6(x)]
T_7(x) = 64x^7 - 112x^5 + 56x^3 - 7x,
T_8(x) = 128x^8 - 256x^6 + 160x^4 - 32x^2 + 1,
T_9(x) = 256x^9 - 576x^7 + 432x^5 - 120x^3 + 9x,
T_10(x) = 512x^10 - 1280x^8 + 1120x^6 - 400x^4 + 50x^2 - 1.    (3)
Note that the coefficient of x^n in T_n(x) is always 2^(n-1); the expressions for 1, x, x^2, …, x^n in terms of the T_n(x) will be useful in the economization of power series.
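The recurrence (2) is easy to implement. The sketch below is a small illustration of my own (not from the text; the helper names chebyshev_coeffs and poly_eval are assumptions): it generates the coefficient lists of T_0, …, T_10 and checks them against the definition T_n(x) = cos(n cos^(-1) x) at a sample point.

```python
import math

def chebyshev_coeffs(n_max):
    """Return coefficient lists (lowest power first) of T_0 .. T_n_max
    using the recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)."""
    T = [[1.0], [0.0, 1.0]]                                   # T0 = 1, T1 = x
    for n in range(1, n_max):
        two_x_Tn = [0.0] + [2.0 * c for c in T[n]]            # 2x * T_n
        prev = T[n - 1] + [0.0] * (len(two_x_Tn) - len(T[n - 1]))
        T.append([a - b for a, b in zip(two_x_Tn, prev)])     # minus T_{n-1}
    return T

def poly_eval(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

T = chebyshev_coeffs(10)
x = 0.3
for n in range(11):
    # The recurrence must agree with T_n(x) = cos(n * arccos(x)).
    assert abs(poly_eval(T[n], x) - math.cos(n * math.acos(x))) < 1e-12
print(T[6])   # expect [-1, 0, 18, 0, -48, 0, 32], i.e. 32x^6 - 48x^4 + 18x^2 - 1
```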
Further, these polynomials satisfy the differential equation

\[
(1 - x^2)\,\frac{d^2 y}{dx^2} - x\,\frac{dy}{dx} + n^2 y = 0.
\]
Also, the Chebyshev polynomials satisfy the orthogonality relation

\[
\int_{-1}^{1} \frac{T_m(x)\,T_n(x)}{\sqrt{1 - x^2}}\,dx =
\begin{cases}
0, & \text{if } m \ne n;\\
\pi/2, & \text{if } m = n \ne 0;\\
\pi, & \text{if } m = n = 0.
\end{cases}
\]
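As a quick numerical sanity check (my own sketch, not from the text), the substitution x = cos θ reduces the integral to the elementary integral of cos mθ cos nθ over [0, π], which the assumed helper cheb_inner below approximates with a simple midpoint rule.

```python
import math

def cheb_inner(m, n, steps=20000):
    """Midpoint-rule approximation of the Chebyshev inner product,
    written in the theta variable after substituting x = cos(theta)."""
    h = math.pi / steps
    return sum(math.cos(m * t) * math.cos(n * t) * h
               for t in (h * (k + 0.5) for k in range(steps)))

print(cheb_inner(2, 3))   # approximately 0      (m != n)
print(cheb_inner(3, 3))   # approximately pi/2   (m = n != 0)
print(cheb_inner(0, 0))   # approximately pi     (m = n = 0)
```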
Another important property of these polynomials is that, of all polynomials of degree n whose coefficient of x^n is unity, the polynomial 2^(1-n) T_n(x) has the smallest least upper bound to its magnitude on the interval [-1, 1]; that is,

\[
\max_{-1 \le x \le 1} \lvert p_n(x)\rvert \;\ge\; 2^{1-n}
   \;=\; \max_{-1 \le x \le 1} \bigl\lvert 2^{1-n} T_n(x)\bigr\rvert.
\]

This is called the minimax property. Here p_n(x) is any polynomial of degree n with leading coefficient unity, and T_n(x) is defined by

T_n(x) = cos(n cos^(-1) x) = 2^(n-1) x^n - …   (7)
Because the maximum magnitude of T_n(x) is one, the upper bound referred to is 1/2^(n-1), i.e., 2^(1-n). This is important because we will be able to write power-series representations of functions whose maximum errors are given in terms of this upper bound. Thus in Chebyshev approximation the maximum error is kept down to a minimum. This is called the minimax principle, and the polynomial P_n(x) = 2^(1-n) T_n(x), n ≥ 1, is called the minimax polynomial. By this process we can obtain the best lower order approximation, called the minimax approximation.
Example 1. Find the best lower-order approximation to the cubic 2x^3 + 3x^2.
Sol. Using the relations given in equation (3), we have

2x^3 + 3x^2 = 2 · (1/4)[3T_1(x) + T_3(x)] + 3x^2
            = 3x^2 + (3/2) T_1(x) + (1/2) T_3(x)
            = 3x^2 + (3/2) x + (1/2) T_3(x),   since T_1(x) = x.

The polynomial 3x^2 + (3/2)x is the required lower order approximation to the given cubic, with a maximum error of ±1/2 in the range [-1, 1].
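A quick numerical confirmation of this example (my own sketch, not part of the text): the deviation of 3x^2 + (3/2)x from the original cubic is exactly (1/2)T_3(x), so its maximum magnitude on [-1, 1] should come out as 0.5.

```python
# Check Example 1 numerically: max |(2x^3 + 3x^2) - (3x^2 + 1.5x)| on [-1, 1]
# should equal 1/2, the magnitude of the dropped (1/2) T3(x) term.
grid = [-1 + k / 1000 for k in range(2001)]
max_err = max(abs((2*x**3 + 3*x**2) - (3*x**2 + 1.5*x)) for x in grid)
print(max_err)   # 0.5, attained at x = ±0.5 and x = ±1
```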
Example 2. Obtain the best lower degree approximation to the cubic x^3 + 2x^2 on the interval [-1, 1].
Sol. We write

x^3 + 2x^2 = (1/4)[3T_1(x) + T_3(x)] + 2x^2
           = 2x^2 + (3/4) T_1(x) + (1/4) T_3(x)
           = 2x^2 + (3/4) x + (1/4) T_3(x).

Hence the polynomial 2x^2 + (3/4)x is the required lower order approximation to the given cubic. The error of this approximation on the interval [-1, 1] is

max over -1 ≤ x ≤ 1 of |(1/4) T_3(x)| = 1/4.
Example 3. Use Chebyshev polynomials to find the best uniform approximation of degree 4 or less to x^5 on [-1, 1].
Sol. In terms of Chebyshev polynomials,

x^5 = (1/16)[10T_1(x) + 5T_3(x) + T_5(x)] = (5/8) T_1 + (5/16) T_3 + (1/16) T_5.

Now T_5 is a polynomial of degree five, so we omit the term (1/16) T_5 and approximate f(x) = x^5 by (5/8) T_1 + (5/16) T_3. Thus the uniform polynomial approximation of degree four or less to x^5 is

x^5 ≈ (5/8) T_1 + (5/16) T_3 = (5/8) x + (5/16)(4x^3 - 3x) = (1/16)(20x^3 - 5x),

and the error of this approximation on [-1, 1] is

max over -1 ≤ x ≤ 1 of |(1/16) T_5(x)| = 1/16.
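This claim is also easy to verify numerically; the short sketch below (my own, not from the text) evaluates the error of the approximation (20x^3 - 5x)/16 on a grid over [-1, 1].

```python
# Check Example 3 numerically: the error x^5 - (20x^3 - 5x)/16 equals (1/16) T5(x),
# so its maximum magnitude on [-1, 1] should be 1/16 = 0.0625.
grid = [-1 + k / 1000 for k in range(2001)]
max_err = max(abs(x**5 - (20*x**3 - 5*x) / 16) for x in grid)
print(max_err)   # 0.0625
```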
Example 4. Find the best lower order approximation to the polynomial

y(x) = 1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120,   -1/2 ≤ x ≤ 1/2.

Sol. On substituting ξ = 2x, so that -1 < ξ < 1, we get

y(ξ) = 1 + ξ/2 + ξ^2/8 + ξ^3/48 + ξ^4/384 + ξ^5/3840.

The above expression can be written in terms of Chebyshev polynomials:

y(ξ) = T_0(ξ) + (1/2) T_1(ξ) + (1/16)[T_0(ξ) + T_2(ξ)] + (1/192)[3T_1(ξ) + T_3(ξ)]
       + (1/3072)[3T_0(ξ) + 4T_2(ξ) + T_4(ξ)] + (1/61440)[10T_1(ξ) + 5T_3(ξ) + T_5(ξ)]
     = 1.063477 T_0(ξ) + 0.515788 T_1(ξ) + 0.063802 T_2(ξ) + 0.00529 T_3(ξ)
       + 0.000326 T_4(ξ) + 0.000016 T_5(ξ).

Dropping the term containing T_5(ξ), we get

y(ξ) ≈ 1 + 0.4999186 ξ + 0.125 ξ^2 + 0.0211589 ξ^3 + 0.0026041 ξ^4.

Hence, with ξ = 2x,

y(x) ≈ 1 + 0.999837 x + 0.5 x^2 + 0.169271 x^3 + 0.041667 x^4.
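As a check of my own (not from the text), the economized quartic should deviate from the original quintic on [-1/2, 1/2] by at most the magnitude of the dropped T_5 term, (1/3840)/16 ≈ 1.6e-5; the sketch below confirms this on a grid.

```python
# Quick check of Example 4: compare the original quintic with the economized
# quartic on [-1/2, 1/2]; the maximum deviation should be about (1/3840)/16.
def y(x):   # the given quintic (truncated exponential series)
    return 1 + x + x**2/2 + x**3/6 + x**4/24 + x**5/120

def p(x):   # economized quartic, coefficients as derived above
    return 1 + 0.999837*x + 0.5*x**2 + 0.169271*x**3 + 0.041667*x**4

grid = [-0.5 + k / 1000 for k in range(1001)]
print(max(abs(y(x) - p(x)) for x in grid))   # about 1.6e-5
```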
Properties of the Chebyshev polynomial T_n(x)
1. T_n(x) is a polynomial of degree n.
2. T_n(-x) = (-1)^n T_n(x), which shows that T_n(x) is an odd function of x if n is odd and an even function of x if n is even.
3. |T_n(x)| ≤ 1 for x ∈ [-1, 1].
4. T_n(x) assumes its extreme values at the (n + 1) points x_m = cos(mπ/n), m = 0, 1, 2, …, n, and the extreme value at x_m is (-1)^m.
5. The integral of T_m(x) T_n(x)/√(1 - x^2) over [-1, 1] equals 0 if m ≠ n, π/2 if m = n ≠ 0, and π if m = n = 0, which can be proved easily by putting x = cos θ. That is, the T_n(x) are orthogonal on the interval [-1, 1] with respect to the weight function w(x) = 1/√(1 - x^2).
6. If P_n(x) is a monic polynomial of degree n, then the maximum of |P_n(x)| on [-1, 1] is at least 2^(1-n), the maximum of |2^(1-n) T_n(x)|; this is known as the minimax property, since |T_n(x)| ≤ 1.
Chebyshev polynomial approximation: Let f(x) be a continuous function defined on the interval [-1, 1], and let B_0 + B_1 x + B_2 x^2 + … + B_n x^n be the required minimax polynomial approximation for f(x). Suppose

f(x) = a_0/2 + Σ (i = 1 to ∞) a_i T_i(x)

is the Chebyshev series expansion for f(x). Then the truncated series, i.e., the partial sum