

2°. We now consider an equation of the general form

εy''_xx = F(x, y, y'_x)   (12.3.5.42)

subject to the boundary conditions (12.3.5.33).

For the leading term of the outer expansion y = y_0(x) + ···, we have the equation

F(x, y_0, y'_0) = 0.

In the general case, when using the method of matched asymptotic expansions, the position of the boundary layer and the form of the inner (extended) variable have to be determined in the course of the solution of the problem.

First we assume that the boundary layer is located near the left boundary. In (12.3.5.42), we make the change of variable z = x/δ(ε) and rewrite the equation as

y''_zz = (δ^2/ε) F(δz, y, (1/δ) y'_z).   (12.3.5.43)

The function δ = δ(ε) is selected so that the right-hand side of equation (12.3.5.43) has a nonzero limit value as ε → 0, provided that z, y, and y'_z are of the order of 1.

Example 5. For F(x, y, y'_x) = –kx^λ y'_x + y, where 0 ≤ λ < 1, the substitution z = x/δ(ε) brings equation (12.3.5.42) to

y''_zz = –(δ^(1+λ)/ε) kz^λ y'_z + (δ^2/ε) y.

In order that the right-hand side of this equation has a nonzero limit value as ε → 0, one has to set δ^(1+λ)/ε = 1 or δ^(1+λ)/ε = const, where const is any positive number. It follows that δ = ε^(1/(1+λ)).

The leading asymptotic term of the inner expansion in the boundary layer, ỹ = ỹ_0(z) + ···, is determined by the equation ỹ''_0 + kz^λ ỹ'_0 = 0, where the prime denotes differentiation with respect to z.

If the position of the boundary layer is selected incorrectly, the outer and inner expansions cannot be matched. In this situation, one should consider the case where the boundary layer is located on the right (this case is reduced to the previous one by the change of variable x = 1 – z). In Example 5 above, the boundary layer is on the left if k > 0 and on the right if k < 0.
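This claim can be checked directly from the inner equation of Example 5 (a short verification, not part of the original text). Integrating ỹ''_0 + kz^λ ỹ'_0 = 0 once gives

ỹ'_0(z) = C_1 exp[–kz^(1+λ)/(1+λ)],   ỹ_0(z) = C_2 + C_1 ∫_0^z exp[–kt^(1+λ)/(1+λ)] dt.

For k > 0 the integrand decays as z → ∞, so ỹ_0 tends to a finite limit and can be matched with the outer solution; for k < 0 it grows without bound, and the boundary layer must be placed at the right boundary instead.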

There is a procedure for matching subsequent asymptotic terms of the expansion (see the seventh row and last column in Table 12.3). In its general form, this procedure can be represented as

inner expansion of the outer expansion (y-expansion for x → 0)
   = outer expansion of the inner expansion (ỹ-expansion for z → ∞).

Remark 1. The method of matched asymptotic expansions can also be applied to construct periodic solutions of singularly perturbed equations (e.g., in the problem of relaxation oscillations of the Van der Pol oscillator).

Remark 2. Two boundary layers can arise in some problems (e.g., in cases where the right-hand side of equation (12.3.5.42) does not explicitly depend on y'_x).

Remark 3. The method of matched asymptotic expansions is also used for solving equations (in semi-infinite domains) that do not degenerate at ε = 0. In such cases, there are no boundary layers; the original variable is used in the inner domain, and an extended coordinate is introduced in the outer domain.

Remark 4. The method of matched asymptotic expansions is successfully applied to various problems in mathematical physics described by partial differential equations; in particular, it plays an important role in the theory of heat and mass transfer and in hydrodynamics.


12.3.6 Galerkin Method and Its Modifications (Projection Methods)

12.3.6-1 General form of an approximate solution

Consider a boundary value problem for the equation

F[y] = f(x)   (12.3.6.1)

with linear homogeneous boundary conditions* at the points x = x_1 and x = x_2 (x_1 ≤ x ≤ x_2).

Here, F is a linear or nonlinear differential operator of the second order (or a higher-order operator); y = y(x) is the unknown function and f = f(x) is a given function. It is assumed that F[0] = 0.

Let us choose a sequence of linearly independent functions (called basis functions)

ϕ_n = ϕ_n(x)   (n = 1, 2, ..., N)   (12.3.6.2)

satisfying the same boundary conditions as y = y(x). According to all methods that will be considered below, an approximate solution of equation (12.3.6.1) is sought as a linear combination

y_N = Σ_{n=1}^{N} A_n ϕ_n(x),   (12.3.6.3)

with the unknown coefficients A_n to be found in the process of solving the problem.

The finite sum (12.3.6.3) is called an approximation function. The remainder R_N is obtained by substituting the finite sum into the left-hand side of equation (12.3.6.1):

R_N = F[y_N] – f(x).   (12.3.6.4)

If the remainder R_N is identically equal to zero, then the function y_N is the exact solution of equation (12.3.6.1). In general, R_N ≢ 0.

12.3.6-2 Galerkin method

In order to find the coefficients A_n in (12.3.6.3), consider another sequence of linearly independent functions

ψ_k = ψ_k(x)   (k = 1, 2, ..., N).   (12.3.6.5)

Let us multiply both sides of (12.3.6.4) by ψ_k and integrate the resulting relation over the region V = {x_1 ≤ x ≤ x_2}, in which we seek the solution of equation (12.3.6.1). Next, we equate the corresponding integrals to zero (for the exact solution, these integrals are equal to zero). Thus, we obtain the following system of linear algebraic equations for the unknown coefficients A_n:

∫_{x_1}^{x_2} ψ_k R_N dx = 0   (k = 1, 2, ..., N).   (12.3.6.6)

Relations (12.3.6.6) mean that the approximation function (12.3.6.3) satisfies equation (12.3.6.1) “on the average” (i.e., in the integral sense) with weights ψ_k. Introducing the scalar product 〈g, h〉 = ∫_{x_1}^{x_2} gh dx of arbitrary functions g and h, we can consider equations (12.3.6.6) as the condition of orthogonality of the remainder R_N to all weight functions ψ_k.

* Nonhomogeneous boundary conditions can be reduced to homogeneous ones by the change of variable z = A_2 x^2 + A_1 x + A_0 + y (the constants A_2, A_1, and A_0 are selected using the method of undetermined coefficients).

The Galerkin method can be applied not only to boundary value problems, but also to eigenvalue problems (in the latter case, one takes f = λy and seeks eigenfunctions y_n together with eigenvalues λ_n).

Mathematical justification of the Galerkin method for specific boundary value problems can be found in the literature listed at the end of Chapter 12. Below we describe some other methods that are in fact special cases of the Galerkin method.

Remark. Most often, one takes suitable sequences of polynomials or trigonometric functions as ϕ_n(x) in the approximation function (12.3.6.3).
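To make the procedure concrete, here is a minimal symbolic sketch of the Galerkin scheme (an illustration, not from the handbook) for a test problem of the form (12.3.6.10) with the data (12.3.6.15) used in the collocation example below; the polynomial basis (1 – x^2)x^(n–1), the value N = 3, and the use of SymPy are assumed, illustrative choices.

import sympy as sp

x = sp.symbols('x')
g = 1 + x**2        # assumed test coefficient g(x), as in (12.3.6.15)
f = -1              # assumed right-hand side f(x), as in (12.3.6.15)
N = 3               # number of basis functions (illustrative)

# Polynomial basis functions vanishing at x = -1 and x = 1 (homogeneous boundary conditions)
phi = [(1 - x**2) * x**(n - 1) for n in range(1, N + 1)]

A = sp.symbols(f'A1:{N + 1}')              # unknown coefficients A_1, ..., A_N
yN = sum(a * p for a, p in zip(A, phi))    # approximation function (12.3.6.3)
R = sp.diff(yN, x, 2) + g * yN - f         # remainder (12.3.6.4) for y'' + g(x)y - f(x) = 0

# Galerkin conditions (12.3.6.6) with psi_k = phi_k (the Bubnov-Galerkin choice)
eqs = [sp.integrate(phi[k] * R, (x, -1, 1)) for k in range(N)]
coeffs = sp.solve(eqs, A)                  # linear algebraic system for the A_n
print(coeffs)
print(sp.expand(yN.subs(coeffs)))          # approximate solution y_N(x)

Replacing phi[k] inside the integrals by powers of x, by F[phi[k]], or by delta functions reproduces the moment, least squares, and collocation variants described below.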

12.3.6-3 Bubnov–Galerkin method, the moment method, the least squares method

1°. The sequences of functions (12.3.6.2) and (12.3.6.5) in the Galerkin method can be chosen arbitrarily. In the case of equal functions,

ϕ_k(x) = ψ_k(x)   (k = 1, 2, ..., N),   (12.3.6.7)

the method is often called the Bubnov–Galerkin method.

2°. The moment method is the Galerkin method with the weight functions (12.3.6.5) being powers of x:

ψ_k = x^k.   (12.3.6.8)

3°. Sometimes, the functions ψ_k are expressed in terms of ϕ_k by the relations

ψ_k = F[ϕ_k]   (k = 1, 2, ...),

where F is the differential operator of equation (12.3.6.1). This version of the Galerkin method is called the least squares method.

12.3.6-4 Collocation method

In the collocation method, one chooses a sequence of points x_k, k = 1, ..., N, and imposes the condition that the remainder (12.3.6.4) be zero at these points:

R_N = 0 at x = x_k   (k = 1, ..., N).   (12.3.6.9)

When solving a specific problem, the points x_k at which the remainder R_N is set equal to zero are regarded as the most significant ones. The number of collocation points N is taken equal to the number of terms in the sum (12.3.6.3). This allows one to obtain a complete system of algebraic equations for the unknown coefficients A_n (for linear boundary value problems, this algebraic system is linear).

Note that the collocation method is a special case of the Galerkin method with the sequence (12.3.6.5) consisting of the Dirac delta functions:

ψ_k = δ(x – x_k).

In the collocation method, there is no need to calculate integrals, and this essentially simplifies the procedure of solving nonlinear problems (although usually this method yields less accurate results than other modifications of the Galerkin method).


Example. Consider the boundary value problem for the linear variable-coefficient second-order ordinary differential equation

y''_xx + g(x)y – f(x) = 0   (12.3.6.10)

subject to the boundary conditions of the first kind

y(–1) = y(1) = 0.   (12.3.6.11)

Assume that the coefficients of equation (12.3.6.10) are smooth even functions, so that f(x) = f(–x) and g(x) = g(–x). We use the collocation method for the approximate solution of problem (12.3.6.10)–(12.3.6.11).

1°. Take the polynomials

y_n(x) = x^(2n–2)(1 – x^2),   n = 1, 2, ..., N,

as the basis functions; they satisfy the boundary conditions (12.3.6.11), y_n(±1) = 0.

Let us consider three collocation points

x_1 = –σ,   x_2 = 0,   x_3 = σ   (0 < σ < 1)   (12.3.6.12)

and confine ourselves to two basis functions (N = 2), so that the approximation function is taken in the form

y(x) = A_1(1 – x^2) + A_2 x^2(1 – x^2).   (12.3.6.13)

Substituting (12.3.6.13) into the left-hand side of equation (12.3.6.10) yields the remainder

R(x) = A_1[–2 + (1 – x^2)g(x)] + A_2[2 – 12x^2 + x^2(1 – x^2)g(x)] – f(x).

It must vanish at the collocation points (12.3.6.12). Taking into account the properties f(σ) = f(–σ) and g(σ) = g(–σ), we obtain two linear algebraic equations for the coefficients A_1 and A_2:

A_1[–2 + g(0)] + 2A_2 – f(0) = 0   (at x = 0),
A_1[–2 + (1 – σ^2)g(σ)] + A_2[2 – 12σ^2 + σ^2(1 – σ^2)g(σ)] – f(σ) = 0   (at x = σ).   (12.3.6.14)

2°. To be specific, let us take the following functions entering equation (12.3.6.10):

f(x) = –1,   g(x) = 1 + x^2.   (12.3.6.15)

On solving the corresponding system of algebraic equations (12.3.6.14), we find the coefficients

A_1 = (σ^4 + 11)/(σ^4 + 2σ^2 + 11),   A_2 = –σ^2/(σ^4 + 2σ^2 + 11).   (12.3.6.16)

In Fig. 12.3, the solid line depicts the numerical solution to problem (12.3.6.10)–(12.3.6.11), with the functions (12.3.6.15), obtained by the shooting method (see Paragraph 12.3.7-3). The dashed lines 1 and 2 show the approximate solutions obtained by the collocation method using formulas (12.3.6.13), (12.3.6.16) with σ = 1/2 (equidistant points) and σ = √2/2 (Chebyshev points, see Subsection 12.4.4), respectively. In both cases the approximate and numerical solutions are in good agreement; the use of Chebyshev points gives a more accurate result.

Figure 12.3. Comparison of the numerical solution of problem (12.3.6.10), (12.3.6.11), (12.3.6.15) with the approximate analytical solution (12.3.6.13), (12.3.6.16) obtained with the collocation method.
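A brief symbolic check of this example (a sketch, not part of the handbook; SymPy is an assumed tool): it imposes the collocation conditions (12.3.6.9) on the approximation function (12.3.6.13) for the data (12.3.6.15) and recovers the coefficients (12.3.6.16).

import sympy as sp

x, sigma, A1, A2 = sp.symbols('x sigma A1 A2')
f = -1                                    # f(x) from (12.3.6.15)
g = 1 + x**2                              # g(x) from (12.3.6.15)

y = A1*(1 - x**2) + A2*x**2*(1 - x**2)    # approximation function (12.3.6.13)
R = sp.diff(y, x, 2) + g*y - f            # remainder of equation (12.3.6.10)

# Collocation conditions (12.3.6.9) at x = 0 and x = sigma; x = -sigma duplicates the latter by symmetry
sol = sp.solve([R.subs(x, 0), R.subs(x, sigma)], [A1, A2])
print(sp.cancel(sol[A1]))    # (sigma**4 + 11)/(sigma**4 + 2*sigma**2 + 11), as in (12.3.6.16)
print(sp.cancel(sol[A2]))    # -sigma**2/(sigma**4 + 2*sigma**2 + 11)

# Coefficient values for the two collocation parameters compared in Fig. 12.3
for s in (sp.Rational(1, 2), sp.sqrt(2)/2):
    print(s, sol[A1].subs(sigma, s), sol[A2].subs(sigma, s))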

Remark The theorem of convergence of the collocation method for linear boundary value problems is

given in Subsection 12.4.4, where nth-order differential equations are considered.


12.3.6-5 Method of partitioning the domain.

The domain V = {x_1 ≤ x ≤ x_2} is split into N subdomains V_k = {x_k1 ≤ x ≤ x_k2}, k = 1, ..., N. In this method, the weight functions are chosen as follows:

ψ_k(x) = 1 for x ∈ V_k,   ψ_k(x) = 0 for x ∉ V_k.

The subdomains V_k are chosen according to the specific properties of the problem under consideration and can generally be arbitrary (the union of all subdomains V_k may differ from the domain V, and some V_k and V_m may overlap).

12.3.6-6 Least squared error method

Sometimes, in order to find the coefficients A_n of the approximation function (12.3.6.3), one uses the least squared error method, based on the minimization of the functional

Φ = ∫_{x_1}^{x_2} R_N^2 dx → min.   (12.3.6.17)

For given functions ϕ_n in (12.3.6.3), the integral Φ is a function of the coefficients A_n. The corresponding necessary conditions of minimum in (12.3.6.17) have the form

∂Φ/∂A_n = 0   (n = 1, ..., N).

This is a system of algebraic (transcendental) equations for the coefficients A_n.
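For illustration, a compact symbolic sketch of this method (not from the handbook), using the same test data (12.3.6.15) and two-term approximation (12.3.6.13) as in the collocation example above; SymPy is an assumed tool.

import sympy as sp

x, A1, A2 = sp.symbols('x A1 A2')
g, f = 1 + x**2, -1                       # test data (12.3.6.15), assumed for illustration
y = A1*(1 - x**2) + A2*x**2*(1 - x**2)    # two-term approximation of the form (12.3.6.3)
R = sp.diff(y, x, 2) + g*y - f            # remainder (12.3.6.4)

Phi = sp.integrate(R**2, (x, -1, 1))      # functional (12.3.6.17)
# Necessary conditions of minimum: dPhi/dA_n = 0
sol = sp.solve([sp.diff(Phi, A1), sp.diff(Phi, A2)], [A1, A2])
print(sol)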

12.3.7 Iteration and Numerical Methods

12.3.7-1 Method of successive approximations (Cauchy problem)

The method of successive approximations is implemented in two steps. First, the Cauchy problem

y''_xx = f(x, y, y'_x)   (equation),   (12.3.7.1)
y(x_0) = y_0,   y'_x(x_0) = y'_0   (initial conditions)   (12.3.7.2)

is reduced to an equivalent system of integral equations by the introduction of the new variable u(x) = y'_x. These integral equations have the form

u(x) = y'_0 + ∫_{x_0}^{x} f(t, y(t), u(t)) dt,   y(x) = y_0 + ∫_{x_0}^{x} u(t) dt.   (12.3.7.3)

Then the solution of system (12.3.7.3) is sought by means of successive approximations defined by the following recurrence formulas:

u_{n+1}(x) = y'_0 + ∫_{x_0}^{x} f(t, y_n(t), u_n(t)) dt,   y_{n+1}(x) = y_0 + ∫_{x_0}^{x} u_n(t) dt;   n = 0, 1, 2, ...

As the initial approximation, one can take y_0(x) = y_0 and u_0(x) = y'_0.
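A minimal numerical sketch of this two-step scheme (not from the handbook): the integrals in (12.3.7.3) are approximated by the trapezoidal rule on a fixed grid, and the test problem y'' = –y, y(0) = 0, y'(0) = 1 is an assumed illustration with exact solution sin x.

import numpy as np

def successive_approximations(f, x0, y0, yp0, x_end, n_iter=10, n_pts=401):
    """Iterate the recurrence formulas above for y'' = f(x, y, y'), y(x0) = y0, y'(x0) = yp0."""
    x = np.linspace(x0, x_end, n_pts)
    dx = np.diff(x)
    y = np.full_like(x, y0)     # initial approximation y_0(x) = y0
    u = np.full_like(x, yp0)    # initial approximation u_0(x) = y'_0
    for _ in range(n_iter):
        F = f(x, y, u)
        # u_{n+1}(x) = y'_0 + integral from x0 to x of f(t, y_n(t), u_n(t)) dt  (trapezoidal rule)
        u_next = yp0 + np.concatenate(([0.0], np.cumsum(0.5 * (F[1:] + F[:-1]) * dx)))
        # y_{n+1}(x) = y_0 + integral from x0 to x of u_n(t) dt
        y_next = y0 + np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * dx)))
        y, u = y_next, u_next
    return x, y, u

# Illustrative check: y'' = -y, y(0) = 0, y'(0) = 1, whose exact solution is sin x
x, y, u = successive_approximations(lambda x, y, u: -y, 0.0, 0.0, 1.0, 1.0)
print(y[-1], np.sin(1.0))   # the two printed values should nearly coincide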


12.3.7-2 Runge–Kutta method (Cauchy problem).

For the numerical integration of the Cauchy problem (12.3.7.1)–(12.3.7.2), one often uses the Runge–Kutta method.

Let Δx be sufficiently small. We introduce the following notation:

x_k = x_0 + kΔx,   y_k = y(x_k),   y'_k = y'_x(x_k),   f_k = f(x_k, y_k, y'_k);   k = 0, 1, 2, ...

The desired values y_k and y'_k are successively found by the formulas

y_{k+1} = y_k + y'_k Δx + (1/6)(f_1 + f_2 + f_3)(Δx)^2,
y'_{k+1} = y'_k + (1/6)(f_1 + 2f_2 + 2f_3 + f_4)Δx,

where

f_1 = f(x_k, y_k, y'_k),
f_2 = f(x_k + (1/2)Δx, y_k + (1/2)y'_k Δx, y'_k + (1/2)f_1 Δx),
f_3 = f(x_k + (1/2)Δx, y_k + (1/2)y'_k Δx + (1/4)f_1(Δx)^2, y'_k + (1/2)f_2 Δx),
f_4 = f(x_k + Δx, y_k + y'_k Δx + (1/2)f_2(Δx)^2, y'_k + f_3 Δx).

In practice, the step Δx is determined in the same way as for first-order equations (see Remark 2 in Paragraph 12.1.10-3).
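A direct transcription of these formulas into code (a sketch; a fixed step Δx is assumed, the step-size control mentioned above is omitted, and the test problem y'' = –y is an illustrative choice):

import math

def runge_kutta_second_order(f, x0, y0, yp0, dx, n_steps):
    """Advance y'' = f(x, y, y') from x0 over n_steps steps of size dx, using the formulas above."""
    x, y, yp = x0, y0, yp0
    for _ in range(n_steps):
        f1 = f(x, y, yp)
        f2 = f(x + dx/2, y + yp*dx/2, yp + f1*dx/2)
        f3 = f(x + dx/2, y + yp*dx/2 + f1*dx**2/4, yp + f2*dx/2)
        f4 = f(x + dx, y + yp*dx + f2*dx**2/2, yp + f3*dx)
        y, yp = y + yp*dx + (f1 + f2 + f3)*dx**2/6, yp + (f1 + 2*f2 + 2*f3 + f4)*dx/6
        x += dx
    return y, yp

# Illustrative check: y'' = -y, y(0) = 0, y'(0) = 1, whose exact solution is sin x
y1, yp1 = runge_kutta_second_order(lambda x, y, yp: -y, 0.0, 0.0, 1.0, 0.01, 100)
print(y1, math.sin(1.0))    # y(1) and sin(1) should agree to high accuracy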

12.3.7-3 Shooting method (boundary value problems)

In order to solve the boundary value problem for equation (12.3.7.1) with the boundary conditions

y(x_1) = y_1,   y(x_2) = y_2,   (12.3.7.4)

one considers an auxiliary Cauchy problem for equation (12.3.7.1) with the initial conditions

y(x_1) = y_1,   y'_x(x_1) = a.   (12.3.7.5)

(The solution of this Cauchy problem can be obtained by the Runge–Kutta method or some other numerical method.) The parameter a is chosen so that the value of the solution y = y(x, a) at the point x = x_2 coincides with the value required by the second boundary condition in (12.3.7.4):

y(x_2, a) = y_2.

In a similar way one constructs the solution of the boundary value problem with mixed boundary conditions

y(x_1) = y_1,   y'_x(x_2) + ky(x_2) = y_2.   (12.3.7.6)

In this case, one also considers the auxiliary Cauchy problem (12.3.7.1), (12.3.7.5). The parameter a is chosen so that the solution y = y(x, a) satisfies the second boundary condition in (12.3.7.6) at the point x = x_2.
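A minimal sketch of the shooting procedure for the boundary conditions (12.3.7.4). It reuses the runge_kutta_second_order routine from the previous sketch; the secant iteration used to adjust the parameter a is one common choice, not prescribed by the handbook, and the test problem is an assumed illustration.

import math
# Assumes the runge_kutta_second_order routine from the previous sketch is in scope.

def shoot(f, x1, y1, x2, y2, a0=0.0, a1=1.0, n_steps=1000, tol=1e-10, max_iter=50):
    """Pick a in the auxiliary Cauchy problem (12.3.7.5) so that y(x2, a) = y2."""
    dx = (x2 - x1) / n_steps
    def mismatch(a):
        y_end, _ = runge_kutta_second_order(f, x1, y1, a, dx, n_steps)
        return y_end - y2
    F0, F1 = mismatch(a0), mismatch(a1)
    for _ in range(max_iter):
        if F1 == F0:
            break
        a0, a1 = a1, a1 - F1*(a1 - a0)/(F1 - F0)   # secant update of the shooting parameter a
        F0, F1 = F1, mismatch(a1)
        if abs(F1) < tol:
            break
    return a1

# Illustrative use: y'' = -y, y(0) = 0, y(1) = 0.5; the exact initial slope is 0.5/sin(1)
a = shoot(lambda x, y, yp: -y, 0.0, 0.0, 1.0, 0.5)
print(a, 0.5/math.sin(1.0))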


12.3.7-4 Method of accelerated convergence in eigenvalue problems.

Consider the Sturm–Liouville problem for the second-order homogeneous linear equation

[f(x)y'_x]'_x + [λg(x) – h(x)]y = 0   (12.3.7.7)

with linear homogeneous boundary conditions of the first kind

y(0) = y(1) = 0.   (12.3.7.8)

It is assumed that the functions f, f'_x, g, h are continuous and f > 0, g > 0.

First, using the Rayleigh–Ritz principle, one finds an upper estimate λ_1^0 for the first eigenvalue [this value is determined by the right-hand side of relation (12.2.5.6)]. Then, one solves numerically the Cauchy problem for the auxiliary equation

[f(x)y'_x]'_x + [λ_1^0 g(x) – h(x)]y = 0   (12.3.7.9)

with the boundary conditions

y(0) = 0,   y'_x(0) = 1.   (12.3.7.10)

The function y(x, λ_1^0) satisfies the condition y(x_0, λ_1^0) = 0, where x_0 < 1. The criterion of closeness of the exact and approximate solutions, λ_1 and λ_1^0, has the form of the inequality |1 – x_0| ≤ δ, where δ is a sufficiently small given constant. If this inequality does not hold, one constructs a refinement for the approximate eigenvalue on the basis of the formula

λ_1^1 = λ_1^0 – ε_0 f(1)[y'_x(1)]^2 / ‖y‖^2,   ε_0 = 1 – x_0,   (12.3.7.11)

where ‖y‖^2 = ∫_0^1 g(x)y^2(x) dx. Then the value λ_1^1 is substituted for λ_1^0 in the Cauchy problem (12.3.7.9)–(12.3.7.10). As a result, a new solution y and a new point x_1 are found, and one has to check whether the criterion |1 – x_1| ≤ δ holds. If this inequality is violated, one refines the approximate eigenvalue by means of the formula

λ_1^2 = λ_1^1 – ε_1 f(1)[y'_x(1)]^2 / ‖y‖^2,   ε_1 = 1 – x_1,   (12.3.7.12)

and repeats the above procedure.
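A numerical sketch of this refinement cycle (an illustration, not part of the handbook). The test data f = g = 1, h = 0, the Rayleigh–Ritz estimate λ_1^0 = 10 obtained with the trial function x(1 – x), and the use of SciPy routines are all assumptions; for this data the exact first eigenvalue is π^2.

import numpy as np
from scipy.integrate import solve_ivp, quad
from scipy.optimize import brentq

# Assumed test coefficients of (12.3.7.7): f = g = 1, h = 0, so the exact first eigenvalue is pi**2
f_c, g_c, h_c = (lambda x: 1.0), (lambda x: 1.0), (lambda x: 0.0)

def cauchy(lam, x_max=1.5):
    """Solve the auxiliary Cauchy problem (12.3.7.9)-(12.3.7.10), with p = f(x)*y' as second unknown."""
    def rhs(x, s):
        y, p = s
        return [p / f_c(x), (h_c(x) - lam * g_c(x)) * y]
    return solve_ivp(rhs, (0.0, x_max), [0.0, f_c(0.0)], dense_output=True, rtol=1e-10, atol=1e-12)

lam = 10.0                                            # Rayleigh-Ritz upper estimate, trial function x*(1 - x)
for _ in range(4):
    sol = cauchy(lam)
    x0 = brentq(lambda t: sol.sol(t)[0], 0.5, 1.5)    # zero of y(x, lam) nearest to x = 1
    eps = 1.0 - x0
    if abs(eps) < 1e-12:
        break
    yp1 = sol.sol(1.0)[1] / f_c(1.0)                  # y'_x(1), recovered from p = f*y'
    norm2 = quad(lambda t: g_c(t) * sol.sol(t)[0]**2, 0.0, 1.0)[0]   # ||y||^2 = int_0^1 g*y^2 dx
    lam -= eps * f_c(1.0) * yp1**2 / norm2            # refinement formula (12.3.7.11)
print(lam, np.pi**2)                                  # the refined estimate approaches pi**2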

Remark 1. Formulas of the type (12.3.7.11) are obtained by a perturbation method based on a transformation of the independent variable x (see Paragraph 12.3.5-2). If x_n > 1, the functions f, g, and h are smoothly extended to the interval (1, ξ], where ξ ≥ x_n.

Remark 2. The algorithm described above has the property of accelerated convergence, ε_{n+1} ∼ ε_n^2, which ensures that the relative error of the approximate solution becomes 10^(–4) to 10^(–8) after two or three iterations for ε_0 ∼ 0.1. This method is quite effective for high-precision calculations, is fail-safe, and guarantees against accumulation of roundoff errors.

Remark 3. In a similar way, one can compute subsequent eigenvalues λ_m, m = 2, 3, ... (to that end, a suitable initial approximation λ_m^0 should be chosen).

Remark 4. A similar computation scheme can also be used in the case of boundary conditions of the second and third kinds, periodic boundary conditions, etc. (see the reference below).
