Taylor Series for Analytic Functions

We have seen that within the radius of convergence R, a power series can be used to define an analytic function. We now consider the theorem which addresses the reverse question of whether every analytic function can be expanded in a Taylor series. We will first discuss the proof and then consider the importance of the result. Even if you are not planning to follow the proof in detail, you may wish to digest the discussion that follows it.

Theorem 6.3. (Taylor). Let $a$ be a point of analyticity of an analytic function $f(z)$ and $R$ the distance to the nearest singularity. Then $f$ can be expanded in a Taylor series centered at $z = a$ for $|z - a| < R$.

Figure 6.5. Taylor series for $f(z)$ about $z = a$.

Proof: Let us consider a circle of radius r < R centered at a as shown in Fig. 6.5.

We then have, due to Eqn. (6.4.21),

$$f(z) - f(a) = \frac{1}{2\pi i}\oint_{|z'-a|=r} f(z')\left[\frac{1}{z'-z} - \frac{1}{z'-a}\right]dz' \tag{6.5.1}$$

$$= \frac{z-a}{2\pi i}\oint_{|z'-a|=r} \frac{f(z')}{(z'-z)(z'-a)}\,dz'. \tag{6.5.2}$$

Now we use

$$\frac{1}{z'-z} = \frac{1}{(z'-a)-(z-a)} \tag{6.5.3}$$

$$= \frac{1}{z'-a}\left[1 + \frac{z-a}{z'-a} + \left(\frac{z-a}{z'-a}\right)^{2} + \cdots + \left(\frac{z-a}{z'-a}\right)^{n-1} + \frac{\left(\frac{z-a}{z'-a}\right)^{n}}{1 - \frac{z-a}{z'-a}}\right], \tag{6.5.4}$$

where we have invoked an old result about series:

$$1 + r + \cdots + r^{n-1} = \frac{1}{1-r} - \frac{r^{n}}{1-r}. \tag{6.5.5}$$

Here we have chosen $r = (z-a)/(z'-a)$ and shifted the $r^{n}/(1-r)$ term to the left-hand side.
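
As a quick check of the one series fact the proof leans on, here is a minimal Python snippet (an illustrative addition, not part of the text) that verifies Eqn. (6.5.5) numerically for an arbitrarily chosen complex ratio; multiplying both sides by $1-r$ shows why it holds exactly.

```python
# Numerical check of Eqn. (6.5.5): 1 + r + ... + r^(n-1) = 1/(1-r) - r^n/(1-r).
# The particular complex ratio r and cutoff n below are arbitrary choices.
r = 0.3 + 0.4j
n = 7

lhs = sum(r**k for k in range(n))       # the finite geometric sum
rhs = 1/(1 - r) - r**n/(1 - r)          # closed form, remainder term shifted over

print(abs(lhs - rhs))                   # ~1e-16: equal to machine precision
```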

If we now invoke Eqns. (6.4.23, 6.5.2) we find

$$f(z) - f(a) = (z-a)f^{(1)}(a) + \frac{1}{2!}(z-a)^{2}f^{(2)}(a) + \cdots + \frac{1}{n!}(z-a)^{n}f^{(n)}(a) + \mathcal{R}, \tag{6.5.6}$$

where the remainder $\mathcal{R}$ is given by:

$$\mathcal{R} = \frac{1}{2\pi i}\oint_{|z'-a|=r} \frac{f(z')\,dz'}{z'-z}\left(\frac{z-a}{z'-a}\right)^{n+1}. \tag{6.5.7}$$

On the circle of integration $|z'-a| = r$, let us write $z' = a + re^{i\theta'}$, so that $dz' = ire^{i\theta'}d\theta'$, and let $M$ be the largest absolute value of $f$. Then

$$|\mathcal{R}| \le \frac{M\,|z-a|^{n+1}\,r}{\bigl||z'-a| - |z-a|\bigr|\,r^{n+1}}. \tag{6.5.8}$$

The details are left to the following exercise.

Problem 6.5.1. First recall that the sum of a set of complex numbers is bounded by the sum of the absolute values of the individual terms. Since the integral is a sum, this implies the integral is bounded by one in which we replace the integrand and $dz'$ by their absolute values. You can now bound every factor in the integrand, calculate the length of the integration contour, and argue that $1/|z'-z| \le \bigl||z'-a| - |z-a|\bigr|^{-1}$ by looking at Fig. 6.5.

Returning to Eqn. (6.5.8), since $|z-a|/r < 1$ and $Mr/\bigl||z'-a| - |z-a|\bigr|$ is bounded, we have a series expansion whose remainder after $n$ terms can be made arbitrarily small by sending $n \to \infty$, i.e., a genuine Taylor series. ∎
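
To see the theorem in action, here is a short numerical sketch (my illustration, not the book's) for the sample function $f(z) = 1/(2-z)$ expanded about $a = 0$. Its nearest singularity is the pole at $z = 2$, so $R = 2$, and the Taylor coefficients are $f^{(n)}(0)/n! = 1/2^{n+1}$; the remainder after $n$ terms indeed shrinks to zero at any point inside the circle of convergence.

```python
# Illustrative example: Taylor series of f(z) = 1/(2 - z) about a = 0.
# Nearest singularity: the pole at z = 2, so the series sum_n z^n / 2^(n+1)
# converges for |z| < 2 and the remainder after n terms tends to zero there.
f = lambda z: 1 / (2 - z)
coeff = lambda n: 1 / 2**(n + 1)        # f^(n)(0)/n! for this particular f

z = 1.2 + 0.9j                          # arbitrary test point with |z| = 1.5 < R = 2
partial = 0.0
for n in range(60):
    partial += coeff(n) * z**n
    if n % 10 == 0:
        print(n, abs(f(z) - partial))   # the remainder |R| decreases toward 0
```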

Let us note the following points implied by this theorem:

• The Taylor series for a function fails to converge only when a singularity is encountered as we go outward from the point of expansion. There are no mysterious breakdowns of the series as in real variables: why is the series for $f(x) = 1/(1+x^{2})$ ill behaved at and beyond $|x| = 1$ when $f$ itself is perfectly well behaved at $|x| = 1$ and indeed for all $x$? (We don't have this query for $1/(1-x)$, which has a clear problem at $x = 1$.) The answer is clear only when we see this function as $1/(1+z^{2})$ evaluated on the real axis: $R = 1$ because the function has poles at $z = \pm i$.

Now, it is fairly clear why the series must break down at a pole: the function diverges there.

But it must also break down at a branch point around which the function is not single valued.

The breakdown of the series is mandated here because, when the series is convergent, it is given by a finite single-valued sum of single-valued terms of the form $z^{n}$, which cannot reproduce the multiple-valuedness around the branch point.
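
The following small Python sketch (an added illustration, with arbitrarily chosen test points) makes the first point concrete: partial sums of the real series for $1/(1+x^{2})$ track the function for $|x| < 1$ and run away beyond $|x| = 1$, even though the function itself is perfectly smooth there; the invisible culprits are the poles at $z = \pm i$.

```python
# Partial sums of 1/(1 + x^2) = 1 - x^2 + x^4 - ...  The radius of convergence
# is R = 1, fixed by the complex poles at z = +i and z = -i, which leave no
# trace on the real axis itself.
def partial_sum(x, n_terms):
    return sum((-1)**k * x**(2 * k) for k in range(n_terms))

for x in (0.5, 0.9, 1.1):
    print(x, 1 / (1 + x**2), partial_sum(x, 40))
# x = 0.5, 0.9: the 40-term sum agrees with the function.
# x = 1.1: the function is a harmless 0.45..., but the partial sum is far off
# and only gets worse as more terms are added.
```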

• Here is another mystery from the real axis. Recall the function $e^{-1/x^{2}}$. It has a well-defined limit, and all its derivatives have well-defined limits, at $x = 0$, namely zero. Thus the origin seems to be a nonsingular point. Yet we cannot develop the function in a Taylor series there: the function and all its derivatives vanish. The existence of such functions means that two different functions, $f(x)$ and $f(x) + e^{-1/x^{2}}$, can have the same derivatives (of all orders) at the origin. Once again we see the true picture only in the complex plane. There we are assured that every function has a Taylor expansion about any analytic point. What about the function $e^{-1/z^{2}}$, you say? How come it cannot be Taylor expanded at the origin, which seems like a very nice point? It turns out the origin is far from being a very nice point; it is home to one of the nastiest singularities, as can be seen in many equivalent ways. First, note that the function is defined by the series

$$e^{-1/z^{2}} = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{z^{2n}\,n!}. \tag{6.5.9}$$

This is an expansion in the variable $1/z^{2}$, and the ratio test tells us it converges for all $1/z^{2}$ that are finite. In particular it does not converge at the origin: the origin is an essential singularity. While we see no sign of this along the real axis, we see upon setting $z = iy$ that the function blows up as $e^{1/y^{2}}$ as we approach the origin from the imaginary direction. The same goes for all its derivatives along this axis. Thus the function really has no unique limit at the origin, and its derivatives are likewise highly sensitive to the direction of approach.

The vanishing $x$-derivatives are not the bona fide direction-independent derivatives of an analytic function at a point of analyticity. Any notion that we can have a legitimate Taylor series at the origin is clearly a mistake that comes from looking along just one of the infinitely many possible directions in the complex plane. The origin is a singular point, and one has to go to the complex plane to appreciate that. In short, there are no exceptions to the rule that an analytic function can be expanded in a Taylor series about any point of analyticity. In particular, a function with all derivatives equal to zero at a point of analyticity is identically zero. For this reason, no two distinct functions can have the same Taylor series: if they did, their difference would have a vanishing Taylor series and would equal zero everywhere, i.e., the functions would be identical, contrary to the assumption.
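
The directional sensitivity is easy to see numerically. Here is a brief Python sketch (an illustrative addition using only the standard cmath module): it evaluates $e^{-1/z^{2}}$ as $z$ approaches the origin along the real and imaginary axes.

```python
import cmath

f = lambda z: cmath.exp(-1 / z**2)      # the function of Eqn. (6.5.9); not defined at z = 0

for t in (0.5, 0.2, 0.1):
    print(t, abs(f(t)), abs(f(1j * t)))
# Along the real axis (z = t) the values plunge toward 0;
# along the imaginary axis (z = it) they explode like e^(1/t^2).
# No unique limit exists at the origin: it is an essential singularity.
```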

• Consider the question of defining a function outside the radius of convergence of the Taylor series, taking as an example $f(z) = 1/(1-z)$, whose Taylor series $1 + z + z^{2} + \cdots$ converged only inside the unit disc, shown by $C_1$ in Fig. 6.6.

Figure 6.6. Analytic continuation.

Inside the unit circle $C_1$, the series converges. It also reproduces the given function: since the Taylor series is a geometric series, we can sum it (for $|z| < 1$) and reproduce $1/(1-z)$.

We know it diverges at some point(s) on the unit circle. Peeking into the closed form result, we know this happens at $z = 1$. If we go outside the unit circle $C_1$, the series has no meaning, while we know the function itself has a meaning everywhere except at $z = 1$. We generate the function outside the unit circle as follows. We first go to another point within the unit circle, say $z = -1/2$. Since we know $f$ (from the series) in the neighborhood of this point, we can compute in principle its derivatives and launch a new Taylor series centered at this point. The radius for this new series will be determined by the singularity nearest to this point. Peeking into the closed form solution, we know this is still the pole at $z = 1$, which now lies 1.5 units away. Thus the expansion will converge within a circle $C_{3/2}$ of radius 1.5 centered at the new point. (If we did not have the closed form answer at hand, the ratio test would give us the new radius, but not the location of the singularity on the circumference.) The new series will agree with the old one within the unit circle and define the function in the nonoverlap region. We have clearly defined the function in a bigger patch of the complex plane, armed with just the Taylor series at the origin.

But will this definition agree with any other, say the closed form expression? Yes, because the series centered at the origin and the closed form expression $1/(1-z)$ are numerically equal in the entire unit circle, and in particular at $z = -1/2$, as are all their derivatives. (Imagine extracting the derivatives by surrounding the point $z = -1/2$ by a circle and using Eqn. (6.4.23).) Thus the series representing the function and the closed form for it will generate the same Taylor series at the point $z = -1/2$, and therefore agree throughout the circle of radius 3/2 centered there. This process can be continued indefinitely, as shown by a few other circles centered at various points. We will find in the end that $f$ can be defined everywhere except at $z = 1$.

In particular, if we work our way to $z = 2$, we will get the value $f(2) = -1$. Of course, in the present problem we need not go to all this trouble: the closed form $f(z) = 1/(1-z)$ defines it for all $z \neq 1$. But the main point has been to show that the Taylor series at the origin contains all the information that the closed form expression did, albeit not in such a compact and convenient form. This is very important, since there are many functions which we know only through their series about some point.

Here are some details on the Taylor expansion centered at $z = -1/2$. First let us use the closed form to generate the Taylor series. There is no need to compute all its derivatives there; we simply rewrite it to obtain the following Taylor expansion in the variable $z - (-\tfrac{1}{2}) = z + \tfrac{1}{2}$:

$$\frac{1}{1-z} = \frac{1}{\tfrac{3}{2} - \left(z + \tfrac{1}{2}\right)} \tag{6.5.10}$$

$$= \frac{2}{3}\,\frac{1}{1 - \tfrac{2}{3}\left(z + \tfrac{1}{2}\right)} \tag{6.5.11}$$

$$= \frac{2}{3}\left[1 + \tfrac{2}{3}\left(z + \tfrac{1}{2}\right) + \left(\tfrac{2}{3}\left(z + \tfrac{1}{2}\right)\right)^{2} + \cdots + \left(\tfrac{2}{3}\left(z + \tfrac{1}{2}\right)\right)^{n} + \cdots\right], \tag{6.5.12}$$

which converges for $\tfrac{2}{3}\left|z + \tfrac{1}{2}\right| < 1$, i.e., within the circle of radius 3/2 centered at $z = -\tfrac{1}{2}$ alluded to above.

Now we compare this to what we get from the power series at the origin. From the series for $f(z)$ we find its derivative

$$f^{(1)}(z) = \frac{d}{dz}\left(1 + z + z^{2} + z^{3} + \cdots\right) = 1 + 2z + 3z^{2} + 4z^{3} + \cdots + nz^{n-1} + \cdots. \tag{6.5.13}$$

The ratio test says the series for the derivative converges for

$$|z| < R = \lim_{n \to \infty} \frac{n}{n+1} = 1, \tag{6.5.14}$$

i.e., again within the unit circle. Its value at $z = -\tfrac{1}{2}$ is

$$f^{(1)}(-1/2) = 1 - 1 + \frac{3}{4} - \frac{4}{8} + \frac{5}{16} - \frac{6}{32} + \frac{7}{64} - \cdots = \frac{4}{9}. \tag{6.5.15}$$

This coincides with the first derivative in the expansion Eqn. (6.5.12). In other words, the derivative of the series for the function is the series for the derivative of the function within the unit circle. This will keep happening for every derivative. (We are simply using the result that the series for a function may be differentiated term by term to reproduce the series for its derivative, both series having the same radius of convergence.) Thus both schemes (based on the Taylor series at the origin and the closed form expression) will produce the same function inside $C_{3/2}$ and in all other continuations.
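
As a numerical companion to this discussion (an added sketch, with the test point chosen purely for illustration), the Python snippet below first sums the differentiated origin series at $z = -1/2$, reproducing the value 4/9 of Eqn. (6.5.15), and then uses the re-centered series (6.5.12) to evaluate the function at a point that lies outside the unit circle, where the origin series is useless, but inside $C_{3/2}$.

```python
# 1) Differentiated origin series 1 + 2z + 3z^2 + ... evaluated at z = -1/2.
z0 = -0.5
d1 = sum(n * z0**(n - 1) for n in range(1, 200))
print(d1)                                # 0.4444... = 4/9, as in Eqn. (6.5.15)

# 2) Re-centered series (6.5.12): (2/3) * sum_n [(2/3)(z + 1/2)]^n,
#    valid for |z + 1/2| < 3/2.  Take z = -1.3: the origin series fails
#    there (|z| > 1), but |z + 1/2| = 0.8 < 3/2, so the new series works.
z = -1.3
recentered = (2/3) * sum(((2/3) * (z + 0.5))**n for n in range(200))
print(recentered, 1 / (1 - z))           # both ~0.43478..., the continuation agrees
```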

• We consider now the notion called the Permanence of Functional Relations. Why does a relation like $\sin^{2}z + \cos^{2}z = 1$ hold off the real axis, given that it does on the real axis? To see this, consider the function $f(z) = \sin^{2}z + \cos^{2}z - 1$. It vanishes everywhere along the $x$-axis. Therefore so do all its derivatives along the $x$-axis, in particular at the origin. But the origin is a point of analyticity of this function (which is made of the sine and cosine), and the $x$-derivatives in question are the direction-independent $z$-derivatives. Therefore $f$ has a Taylor expansion, and this expansion gives identically zero in the complex plane. Therefore $f$ will be identically zero, i.e., $\sin^{2}z$ will continue to equal $1 - \cos^{2}z$ in the complex plane.

More generally, if $f$ and $g$ are two functions which agree on a line segment lying entirely within the domain of analyticity of both functions, their continuations outside this domain will also agree, since the difference $f - g$ has a vanishing Taylor series (the derivatives being taken along the segment) and therefore vanishes. Equivalently, if we construct the Taylor series for $f$ and $g$ starting at some point on the segment, the series will be identical, since the functions are numerically identical and hence possess identical derivatives. For this reason, the two series will have the same radius of convergence and run into the same singularities at the boundary.

If we now move to another point in this circle and set up a new series to effect an analytic continuation, these new series will again be identical for the same reason as before, and the two functions will forever be equal. Thus, given a relation like $\cosh x = (e^{x} + e^{-x})/2$ on the real axis, it follows that $\cosh z = (e^{z} + e^{-z})/2$ for all $z$.

Let us note that the relation in question must be between analytic functions before it can be analytically continued. Here is an example that is not analytic. Recall that for real $x$ we had

$$\cos x = \frac{[e^{ix}] + [e^{ix}]^{*}}{2}. \tag{6.5.16}$$

This is not a relation that is preserved upon continuation. In other words,

$$\cos z \neq \frac{[e^{iz}] + [e^{iz}]^{*}}{2} \tag{6.5.17}$$

for $z$ off the real axis.

Problem 6.5.2. Compare the two sides of the above relation on the imaginary axis $z = iy$ and show that they do not match.

The problem is that the above relation involves complex conjugation, which converts the allowed variable $z$ to the forbidden one $z^{*}$. Does this mean we cannot relate the exponential to the sines and cosines after continuation into the complex plane? Not so, as we have already seen. Instead of thinking of $e^{-ix}$ as the complex conjugate of $f(x) = e^{ix}$, think of it as $f(-x)$. Then we can view the above relation as

$$\cos z = \frac{e^{iz} + e^{-iz}}{2} \tag{6.5.18}$$

evaluated on the real axis. But now we have a relation between the analytic functions $\cos z$ and $e^{\pm iz}$, valid in the complex plane.
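
A short numerical check (an added illustration, using Python's cmath and an arbitrary test point) of the contrast drawn above: the analytic relations for $\cosh z$ and $\cos z$ survive off the real axis, while the conjugated combination of Eqn. (6.5.17) does not reproduce $\cos z$ there.

```python
import cmath

z = 0.7 + 1.3j                            # arbitrary point off the real axis

# Analytic relations: these continue to hold in the complex plane.
print(abs(cmath.cosh(z) - (cmath.exp(z) + cmath.exp(-z)) / 2))          # ~0
print(abs(cmath.cos(z) - (cmath.exp(1j*z) + cmath.exp(-1j*z)) / 2))     # ~0

# The non-analytic relation (6.5.16): conjugation spoils it off the real axis.
conjugated = (cmath.exp(1j*z) + cmath.exp(1j*z).conjugate()) / 2
print(abs(cmath.cos(z) - conjugated))     # clearly nonzero
```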

Problem 6.5.3. On the real axis we had $e^{ix} = \cos x + i\sin x$. Is this an analytic relationship which will survive the continuation to complex $z$?

Problem 6.5.4. On the real axis we have $[e^{ix}][e^{ix}]^{*} = 1$. Is it true that $[e^{iz}][e^{iz}]^{*} = 1$? Give reasons for your answer and then the supporting calculation.
