12.6.2-3 Euler system of ordinary differential equations.
1◦ A homogeneous Euler system is a homogeneous linear system of ordinary differential equations composed of linear combinations of the terms
y_k, x y_k', x^2 y_k'', ..., x^{m_k} y_k^{(m_k)};   k = 1, 2, ..., n.
Such a system is invariant under scaling in the independent variable (i.e., it preserves its form under the change of variable x → αx, where α is any nonzero number). A nonhomogeneous Euler system additionally contains prescribed functions of x on the right-hand sides of the equations.
The substitution x = b e^t (b ≠ 0) brings an Euler system, both homogeneous and nonhomogeneous, to a constant-coefficient linear system of equations.
Example. In general, a nonhomogeneous Euler system of second-order equations has the form
Σ_{k=1}^{n} [ a_{mk} x^2 (d^2 y_k/dx^2) + b_{mk} x (dy_k/dx) + c_{mk} y_k ] = f_m(x),   m = 1, 2, ..., n.   (12.6.2.8)
The substitution x = e^t brings this system to the constant-coefficient linear system
Σ_{k=1}^{n} [ a_{mk} (d^2 y_k/dt^2) + (b_{mk} - a_{mk}) (dy_k/dt) + c_{mk} y_k ] = f_m(e^t),   m = 1, 2, ..., n,
which can be solved using, for example, the Laplace transform (see Example 4 from Paragraph 12.6.1-6).
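The change of variable can also be checked symbolically. The sketch below is a minimal single-equation case (n = 1, a_{11} = 1, symbolic b and c); the use of SymPy and the particular equation are illustrative choices, not part of the text.

```python
import sympy as sp

# Single-equation Euler operator x^2 y'' + b x y' + c y.
# Writing y(x) = u(t) with x = e^t (so t = ln x) and applying the chain rule
# should give the constant-coefficient operator u'' + (b - 1) u' + c u.
t, b, c = sp.symbols('t b c')
x = sp.exp(t)
u = sp.Function('u')

y = u(t)
dy_dx = sp.diff(y, t) / x          # dy/dx = u'(t) * dt/dx = u'(t)/x
d2y_dx2 = sp.diff(dy_dx, t) / x    # second derivative by the same rule

euler_lhs = sp.simplify(x**2 * d2y_dx2 + b * x * dy_dx + c * y)
print(euler_lhs)   # expected: u'' + (b - 1) u' + c u, matching the (b_mk - a_mk) coefficient
```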
2◦ Particular solutions to a homogeneous Euler system (for system (12.6.2.8), corresponding to f_m(x) ≡ 0) are sought in the form of power functions:
y_1 = A_1 x^σ,   y_2 = A_2 x^σ,   ...,   y_n = A_n x^σ,   (12.6.2.9)
where the coefficients A_1, A_2, ..., A_n are determined by solving the associated homogeneous system of algebraic equations obtained by substituting expressions (12.6.2.9) into the differential equations of the system in question and dividing by x^σ. Since this algebraic system is homogeneous, for it to have nontrivial solutions, its determinant must vanish. This results in a dispersion equation for the exponent σ.
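As a hedged illustration of this procedure, the SymPy sketch below builds the algebraic system for a hypothetical 2×2 homogeneous Euler system (the coefficient matrices a, b, c are arbitrary choices, not taken from the text) and solves the vanishing-determinant condition for σ.

```python
import sympy as sp

# Hypothetical 2x2 homogeneous Euler system: coefficients a_mk, b_mk, c_mk
# chosen only for illustration.
sigma = sp.symbols('sigma')
a = sp.Matrix([[1, 0], [0, 1]])
b = sp.Matrix([[0, 0], [0, 0]])
c = sp.Matrix([[-2, 1], [1, -2]])

# Substituting y_k = A_k x^sigma into (12.6.2.8) with f_m = 0 and dividing
# by x^sigma gives the matrix M(sigma) acting on the vector (A_1, A_2).
M = sp.Matrix(2, 2, lambda m, k: a[m, k]*sigma*(sigma - 1)
                                 + b[m, k]*sigma + c[m, k])

# Nontrivial (A_1, A_2) exist only where det M(sigma) = 0 (the equation for sigma).
print(sp.solve(sp.Eq(M.det(), 0), sigma))
```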
12.7 Nonlinear Systems of Ordinary Differential Equations
12.7.1 Solutions and First Integrals. Uniqueness and Existence Theorems
12.7.1-1 Systems solved for the derivatives. A solution and the general solution
We will be dealing with a system of first-order ordinary differential equations solved for the derivatives
y_k' = f_k(x, y_1, ..., y_n),   k = 1, ..., n.   (12.7.1.1)
Here and henceforth throughout the current chapter, the prime denotes a derivative with respect to the independent variable x.
A set of numbers x, y_1, ..., y_n is conveniently treated as a point in the (n + 1)-dimensional space.
For brevity, system (12.7.1.1) is conventionally written in vector form:
y' = f(x, y),
where y and f are the vectors defined as y = (y_1, ..., y_n)^T and f = (f_1, ..., f_n)^T.
A solution (also an integral or an integral curve) of a system of differential equations (12.7.1.1) is a set of functions y_1 = y_1(x), ..., y_n = y_n(x) such that, when substituted into all equations (12.7.1.1), they turn them into identities. The general solution of a system of differential equations is the set of all its solutions. In the general case, the general solution of system (12.7.1.1) depends on n arbitrary constants.
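As a purely numerical illustration (the right-hand sides and initial data below are arbitrary choices, not from the text), one particular solution of a system of the form (12.7.1.1) with n = 2 can be computed with SciPy; varying the two initial values sweeps out the two-parameter general solution.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A system of the form (12.7.1.1) with n = 2: y1' = y2, y2' = -sin(y1).
def f(x, y):
    y1, y2 = y
    return [y2, -np.sin(y1)]

# Fixing the initial data selects one particular solution from the general one.
sol = solve_ivp(f, (0.0, 10.0), [1.0, 0.0], dense_output=True)
print(sol.sol(5.0))   # values (y1(5), y2(5)) of this particular solution
```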
12.7.1-2 Existence and uniqueness theorems
THEOREM (EXISTENCE, PEANO). Let the functions f_k(x, y_1, ..., y_n) (k = 1, ..., n) be continuous in a domain G of the (n + 1)-dimensional space of the variables x, y_1, ..., y_n. Then there is at least one integral curve passing through every point M(x◦, y_1◦, ..., y_n◦) in G. Each such curve can be extended on both ends up to the boundary of any closed domain completely belonging to G and containing the point M inside.
Remark. If there is more than one integral curve passing through the point M, there are infinitely many integral curves passing through M.
THEOREM (UNIQUENESS). There is a unique integral curve passing through the point M(x◦, y_1◦, ..., y_n◦) if the functions f_k have partial derivatives with respect to all y_m that are continuous in x, y_1, ..., y_n in the domain G, or if each function f_k in G satisfies the Lipschitz condition
|f_k(x, ȳ_1, ..., ȳ_n) - f_k(x, y_1, ..., y_n)| ≤ A Σ_{m=1}^{n} |ȳ_m - y_m|,
where A is some positive number.
12.7.1-3 Reduction of systems of equations to a single equation
Suppose the right-hand sides of equations (12.7.1.1) are n times differentiable in all variables. Then system (12.7.1.1) can be reduced to a single nth-order equation. Indeed, using the chain rule, let us differentiate the first equation of system (12.7.1.1) with respect to x to get
y_1'' = ∂f_1/∂x + (∂f_1/∂y_1) y_1' + ··· + (∂f_1/∂y_n) y_n'.   (12.7.1.2)
Then change the first derivatives y_k' in (12.7.1.2) to f_k(x, y_1, ..., y_n) [the right-hand sides of equations (12.7.1.1)] to obtain
y_1'' = F_2(x, y_1, ..., y_n),   (12.7.1.3)
where F_2(x, y_1, ..., y_n) ≡ ∂f_1/∂x + (∂f_1/∂y_1) f_1 + ··· + (∂f_1/∂y_n) f_n. Now differentiate equation (12.7.1.3) with respect to x and replace the first derivatives y_k' on the right-hand side of the resulting equation by f_k. As a result, we obtain
y_1''' = F_3(x, y_1, ..., y_n),
where F_3(x, y_1, ..., y_n) ≡ ∂F_2/∂x + (∂F_2/∂y_1) f_1 + ··· + (∂F_2/∂y_n) f_n. Repeating this procedure as many times as required, one arrives at the following system of equations:
y_1' = F_1(x, y_1, ..., y_n),
y_1'' = F_2(x, y_1, ..., y_n),
. . . . . . . . . . . . . . . . . .
y_1^(n) = F_n(x, y_1, ..., y_n),
where F_1(x, y_1, ..., y_n) ≡ f_1(x, y_1, ..., y_n) and F_{k+1}(x, y_1, ..., y_n) ≡ ∂F_k/∂x + (∂F_k/∂y_1) f_1 + ··· + (∂F_k/∂y_n) f_n. Expressing y_2, y_3, ..., y_n from the first n - 1 equations of this system in terms of x, y_1, y_1', ..., y_1^(n-1) and then substituting the resulting expressions into the last equation of this system, one finally arrives at an nth-order equation:
y_1^(n) = Φ(x, y_1, y_1', ..., y_1^(n-1)).   (12.7.1.4)
Remark 1. If (12.7.1.1) is a linear system of first-order differential equations, then (12.7.1.4) is a linear nth-order equation.
Remark 2. Any equation of the form (12.7.1.4) can be reduced to a system of n first-order equations (see Paragraph 12.5.1-4).
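A small symbolic sketch of this elimination, on an assumed toy system y_1' = y_2, y_2' = -y_1 (not an example from the text), shows the construction of F_2 and the resulting second-order equation.

```python
import sympy as sp

# Toy system of the form (12.7.1.1): y1' = y2, y2' = -y1 (n = 2).
x, y1, y2 = sp.symbols('x y1 y2')
f1 = y2     # right-hand side of the first equation
f2 = -y1    # right-hand side of the second equation

# F2 = df1/dx + (df1/dy1) f1 + (df1/dy2) f2, as in the construction above
F2 = sp.diff(f1, x) + sp.diff(f1, y1)*f1 + sp.diff(f1, y2)*f2
print(F2)   # -> -y1

# The first equation gives y2 = y1', so y1'' = F2 = -y1, i.e. the single
# second-order equation y1'' + y1 = 0 of the form (12.7.1.4).
```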
12.7.1-4 First integrals. Using them to reduce system dimension
1◦ A relation of the form
Ψ(x, y_1, ..., y_n) = C,   (12.7.1.5)
where C is an arbitrary constant, is called a first integral of system (12.7.1.1) if its left-hand side Ψ, generally not identically constant, is turned into a constant by any particular solution y_1, ..., y_n of system (12.7.1.1). In the sequel, we consider only continuously differentiable functions Ψ(x, y_1, ..., y_n) in a given domain of variation of their arguments.
THEOREM. An expression of the form (12.7.1.5) is a first integral of system (12.7.1.1) if and only if the function Ψ = Ψ(x, y_1, ..., y_n) satisfies the relation
∂Ψ/∂x + Σ_{k=1}^{n} (∂Ψ/∂y_k) f_k(x, y_1, ..., y_n) = 0.
This relation may be treated as a first-order partial differential equation for Ψ.
Different first integrals of system (12.7.1.1) are called independent if the Jacobian of their left-hand sides is nonzero.
System (12.7.1.1) admits n independent first integrals if the conditions of the uniqueness theorem from Paragraph 12.7.1-2 are met.
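The criterion is easy to check symbolically; the sketch below uses an assumed toy system y_1' = y_2, y_2' = -y_1 (not from the text) and the candidate Ψ = y_1^2 + y_2^2.

```python
import sympy as sp

# Toy system y1' = y2, y2' = -y1 and the candidate first integral
# Psi = y1^2 + y2^2; check that dPsi/dx + sum_k (dPsi/dy_k) f_k = 0.
x, y1, y2 = sp.symbols('x y1 y2')
f = [y2, -y1]
Psi = y1**2 + y2**2

residual = sp.diff(Psi, x) + sp.diff(Psi, y1)*f[0] + sp.diff(Psi, y2)*f[1]
print(sp.simplify(residual))   # -> 0, so Psi(y1, y2) = C is a first integral
```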
2◦ Given a first integral (12.7.1.5) of system (12.7.1.1), it may be treated as an implicit specification of one of the unknowns. Solving (12.7.1.5), for example, for y_n yields y_n = G(x, y_1, ..., y_{n-1}). Substituting this expression into the first n - 1 equations of system (12.7.1.1), one obtains a system in n - 1 variables with one arbitrary constant.
Likewise, given m independent first integrals of system (12.7.1.1),
Ψ_k(x, y_1, ..., y_n) = C_k,   k = 1, ..., m   (m < n),
the system may be reduced to a system of n - m first-order equations in n - m unknowns.
12.7.2 Integrable Combinations. Autonomous Systems of Equations
12.7.2-1 Integrable combinations
In some cases, first integrals of systems of differential equations may be obtained by finding integrable combinations. An integrable combination is a differential equation that is easy to integrate and is a consequence of the equations of the system under consideration. Most commonly, an integrable combination is an equation of the form
dΦ(x, y_1, ..., y_n) = 0,   (12.7.2.1)
or an equation reducible by a change of variables to one of the integrable types of equations in one unknown.
Example 1. Consider the nonlinear system
a y_1' = (b - c) y_2 y_3,   b y_2' = (c - a) y_1 y_3,   c y_3' = (a - b) y_1 y_2,   (12.7.2.2)
where a, b, and c are some constants. Such systems arise in the theory of motion of a rigid body.
Let us multiply the first equation by y_1, the second by y_2, and the third by y_3 and add them together to obtain
a y_1 y_1' + b y_2 y_2' + c y_3 y_3' = 0   ⟹   d(a y_1^2 + b y_2^2 + c y_3^2) = 0.
Integrating yields a first integral:
a y_1^2 + b y_2^2 + c y_3^2 = C_1.   (12.7.2.3)
Now multiply the first equation of the system by a y_1, the second by b y_2, and the third by c y_3 and add them together to obtain
a^2 y_1 y_1' + b^2 y_2 y_2' + c^2 y_3 y_3' = 0   ⟹   d(a^2 y_1^2 + b^2 y_2^2 + c^2 y_3^2) = 0.
Integrating yields another first integral:
a^2 y_1^2 + b^2 y_2^2 + c^2 y_3^2 = C_2.   (12.7.2.4)
Unless a = b = c (in which case system (12.7.2.2) can be integrated directly), the above two first integrals (12.7.2.3) and (12.7.2.4) are independent. Hence, using them, one can express y_2 and y_3 in terms of y_1 and then substitute the resulting expressions into the first equation of system (12.7.2.2). As a result, one arrives at a single separable first-order differential equation for y_1.
In this example, the integrable combinations have the form (12.7.2.1).
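A numerical sanity check of the two first integrals is straightforward; in the sketch below the constants a, b, c and the initial data are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 1.0, 2.0, 3.0   # illustrative constants in system (12.7.2.2)

def rhs(t, y):
    y1, y2, y3 = y
    return [(b - c)*y2*y3/a, (c - a)*y1*y3/b, (a - b)*y1*y2/c]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.5, -0.3], rtol=1e-10, atol=1e-12)

I1 = a*sol.y[0]**2 + b*sol.y[1]**2 + c*sol.y[2]**2            # (12.7.2.3)
I2 = a**2*sol.y[0]**2 + b**2*sol.y[1]**2 + c**2*sol.y[2]**2   # (12.7.2.4)
print(I1.max() - I1.min(), I2.max() - I2.min())  # both ~0 along the trajectory
```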
Example 2. A specific example of finding an integrable combination reducible with a change of variables
to a simpler, integrable equation in one unknown can be found in Paragraph 12.6.1-5.
12.7.2-2 Autonomous systems and their reduction to systems of lower dimension
1◦ A system of equations is called autonomous if the right-hand sides of the equations do not depend explicitly on x. In general, such systems have the form
y_k' = f_k(y_1, ..., y_n),   k = 1, ..., n.   (12.7.2.5)
If y(x) is a solution of the autonomous system (12.7.2.5), then the function y(x + C),
where C is an arbitrary constant, is also a solution of this system.
A point y◦ = (y_1◦, ..., y_n◦) is called an equilibrium point (or a stationary point) of the autonomous system (12.7.2.5) if
f_k(y_1◦, ..., y_n◦) = 0,   k = 1, ..., n.
To an equilibrium point there corresponds a special, simplest particular solution in which all unknowns are constant:
y_1 = y_1◦,   ...,   y_n = y_n◦.
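For a concrete autonomous system of the form (12.7.2.5), the equilibrium points can be found by solving f_k = 0 symbolically; the right-hand sides below are purely illustrative, not an example from the text.

```python
import sympy as sp

# Assumed toy autonomous system: y1' = y1*(1 - y2), y2' = y2*(y1 - 1).
y1, y2 = sp.symbols('y1 y2')
f = [y1*(1 - y2), y2*(y1 - 1)]

# Equilibrium (stationary) points solve f_1 = f_2 = 0.
print(sp.solve(f, [y1, y2], dict=True))   # expected: y1 = y2 = 0 and y1 = y2 = 1
```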
2◦ Any n-dimensional autonomous system of the form (12.7.2.5) can be reduced to an (n - 1)-dimensional system of equations independent of x. To this end, one should select one of the equations and divide the other n - 1 equations of the system by it.
Example 3. The autonomous system of two first-order equations
y_x' = f_1(y, z),   z_x' = f_2(y, z)   (12.7.2.6)
is reduced by dividing the first equation by the second to a single equation for y = y(z):
y_z' = f_1(y, z)/f_2(y, z).   (12.7.2.7)
If the general solution of equation (12.7.2.7) is obtained in the form
y = φ(z, C_1),   (12.7.2.8)
then z = z(x) is found in implicit form from the second equation in (12.7.2.6) by quadrature:
∫ dz/f_2(φ(z, C_1), z) = x + C_2.   (12.7.2.9)
Formulas (12.7.2.8)–(12.7.2.9) determine the general solution of system (12.7.2.6) with two arbitrary constants, C_1 and C_2.
Remark. The dependent variables y and z in the autonomous system (12.7.2.6) are often called phase
variables; the plane y, z they form is called a phase plane, which serves to display integral curves of equation
(12.7.2.7).
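The reduction in Example 3 can be reproduced symbolically; the right-hand sides f_1 = yz and f_2 = z^2 in the sketch below are arbitrary illustrative choices, not taken from the text.

```python
import sympy as sp

# Assumed system of the form (12.7.2.6): y_x' = y*z, z_x' = z**2.
# Dividing the first equation by the second gives y'(z) = (y*z)/z**2 = y/z.
z = sp.symbols('z')
y = sp.Function('y')

reduced = sp.Eq(y(z).diff(z), (y(z)*z) / z**2)
print(sp.dsolve(reduced))   # expected: y(z) = C1*z, i.e. the form (12.7.2.8)
# The quadrature (12.7.2.9) then gives z(x): dz/z**2 = dx, so z = -1/(x + C2).
```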
12.7.3 Elements of Stability Theory
12.7.3-1 Lyapunov stability. Asymptotic stability
1◦ In many applications, time t plays the role of the independent variable, and the associated
system of differential equations is conventionally written in the following notation:
x_k' = f_k(t, x_1, ..., x_n),   k = 1, ..., n.   (12.7.3.1)
Here the x_k = x_k(t) are unknown functions that may be treated as coordinates of a moving point in an n-dimensional space.
Let us supply system (12.7.3.1) with the initial conditions
x_k = x_k◦   at   t = t◦   (k = 1, ..., n).   (12.7.3.2)
Denote by
x_k = ϕ_k(t; t◦, x_1◦, ..., x_n◦),   k = 1, ..., n,   (12.7.3.3)
the solution of system (12.7.3.1) with the initial conditions (12.7.3.2).
A solution (12.7.3.3) of system (12.7.3.1) is called Lyapunov stable if for any ε > 0 there exists a δ > 0 such that if
|x_k◦ - x̃_k◦| < δ,   k = 1, ..., n,   (12.7.3.4)
then the following inequalities hold for t◦ ≤ t < ∞:
|ϕ_k(t; t◦, x_1◦, ..., x_n◦) - ϕ_k(t; t◦, x̃_1◦, ..., x̃_n◦)| < ε,   k = 1, ..., n.
Any solution which is not stable is called unstable. Solution (12.7.3.3) is called unperturbed, and the solution ϕ_k(t; t◦, x̃_1◦, ..., x̃_n◦) is called perturbed. Geometrically, Lyapunov stability means that the trajectory of the perturbed solution stays at all times t ≥ t◦ within a small neighborhood of the associated unperturbed solution.
2◦ A solution (12.7.3.3) of system (12.7.3.1) is called asymptotically stable if it is Lyapunov
stable and, in addition, with inequalities (12.7.3.4) met, satisfies the conditions
lim_{t→∞} |ϕ_k(t; t◦, x_1◦, ..., x_n◦) - ϕ_k(t; t◦, x̃_1◦, ..., x̃_n◦)| = 0,   k = 1, ..., n.   (12.7.3.5)
Remark. Condition (12.7.3.5) alone does not suffice for the solution to be Lyapunov stable.
3◦ In stability analysis, it is normally assumed, without loss of generality, that t◦ = x_1◦ = ··· = x_n◦ = 0 (this can be achieved by shifting each of the variables by a constant value). Further, with the changes of variables
z_k = x_k - ϕ_k(t; t◦, x_1◦, ..., x_n◦)   (k = 1, ..., n),
the stability analysis of any solution is reduced to that of the zero solution z_1 = ··· = z_n = 0.
12.7.3-2 Theorems of stability and instability by first approximation.
In studying stability of the trivial solution x_1 = ··· = x_n = 0 of system (12.7.3.1), the following method is often employed. The right-hand sides of the equations are approximated by the principal (linear) terms of their Taylor series expansions about the equilibrium point:
f_k(t, x_1, ..., x_n) ≈ a_{k1}(t) x_1 + ··· + a_{kn}(t) x_n,   a_{km}(t) = ∂f_k/∂x_m |_{x_1 = ··· = x_n = 0},   k = 1, ..., n.
Then a stability analysis of the resulting simplified, linear system is performed. The question arises: Is it possible to draw correct conclusions about the stability of the original nonlinear system (12.7.3.1) from the analysis of the linearized system? Two theorems stated below give a partial answer to this question.
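The linearization step amounts to evaluating the Jacobian matrix a_{km} = ∂f_k/∂x_m at the equilibrium; the sketch below does this for an assumed autonomous example (the right-hand sides are illustrative only, not from the text).

```python
import sympy as sp

# Assumed autonomous example of (12.7.3.1): x1' = sin(x2) - x1, x2' = x1*x2 - x2,
# which has an equilibrium at x1 = x2 = 0.
x1, x2 = sp.symbols('x1 x2')
f = sp.Matrix([sp.sin(x2) - x1, x1*x2 - x2])

# a_km = df_k/dx_m evaluated at the equilibrium point
A = f.jacobian([x1, x2]).subs({x1: 0, x2: 0})
print(A)              # expected: Matrix([[-1, 1], [0, -1]])
print(A.eigenvals())  # expected: {-1: 2}; all real parts negative
```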
THEOREM (STABILITY BY FIRST APPROXIMATION). Suppose in the system
x_k' = a_{k1} x_1 + ··· + a_{kn} x_n + ψ_k(t, x_1, ..., x_n),   k = 1, ..., n,   (12.7.3.6)
the functions ψ_k are defined and continuous in a domain t ≥ 0, |x_k| ≤ b (k = 1, ..., n) and, in addition, the inequality
Σ_{k=1}^{n} |ψ_k| ≤ A Σ_{k=1}^{n} |x_k|   (12.7.3.7)
holds for some constant A. In particular, this implies that ψ_k(t, 0, ..., 0) = 0, and therefore
x_1 = ··· = x_n = 0   (12.7.3.8)
is a solution of system (12.7.3.6). Suppose further that
( Σ_{k=1}^{n} |ψ_k| ) / ( Σ_{k=1}^{n} |x_k| ) → 0   as   Σ_{k=1}^{n} |x_k| → 0 and t → ∞,   (12.7.3.9)
and the real parts of all roots of the characteristic equation
det |a_{ij} - λ δ_{ij}| = 0,   where δ_{ij} = 1 if i = j and δ_{ij} = 0 if i ≠ j,   (12.7.3.10)
are negative. Then solution (12.7.3.8) is stable.
Remark 1. Necessary and sufficient conditions for the real parts of all roots of the characteristic equation (12.7.3.10) to be negative are established by Hurwitz's theorem, which makes it possible to avoid solving the equation.
Remark 2. In the above system, the a_{ij}, x_k, and ψ_k may be complex valued.
THEOREM (INSTABILITY BY FIRST APPROXIMATION). Suppose conditions (12.7.3.7) and (12.7.3.9) are met and the conditions imposed on the functions ψ_k in the previous theorem also hold. If at least one root of the characteristic equation (12.7.3.10) has a positive real part, then the equilibrium point (12.7.3.8) of system (12.7.3.6) is unstable.
Example 1. Consider the following two-dimensional system of the form (12.7.3.6) with real coefficients:
x_t' = a_11 x + a_12 y + ψ_1(t, x, y),
y_t' = a_21 x + a_22 y + ψ_2(t, x, y).   (12.7.3.11)
We assume that the functions ψ_1 and ψ_2 satisfy conditions (12.7.3.7) and (12.7.3.9).
The characteristic equation of the linearized system (obtained by setting ψ_1 = ψ_2 = 0) is given by
λ^2 - bλ + c = 0,   where   b = a_11 + a_22,   c = a_11 a_22 - a_12 a_21.   (12.7.3.12)
1. Using the theorem of stability by first approximation and examining the roots of the quadratic equation (12.7.3.12), we obtain two sufficient stability conditions for system (12.7.3.11):
b < 0,   0 < (1/4)b^2 < c   (complex roots with negative real part);
b < 0,   0 < c < (1/4)b^2   (negative real roots).
The two conditions can be combined into one:
b < 0,   c > 0,   or   a_11 + a_22 < 0,   a_11 a_22 - a_12 a_21 > 0.
These inequalities define the second quadrant in the plane b, c; see Fig 12.10.
[Figure 12.10. Domains of stability and instability of the trivial solution of system (12.7.3.11) in the plane b, c: the domain of stability (quadrant II), the domains of instability (quadrants I, III, IV), and boundary regions where stability is undecided.]
2. Using the theorem of instability by first approximation and examining the roots of the quadratic equation (12.7.3.12), we get three sufficient instability conditions for system (12.7.3.11):
b > 0,   0 < (1/4)b^2 < c   (complex roots with positive real part);
b > 0,   0 < c < (1/4)b^2   (positive real roots);
c < 0,   b arbitrary   (real roots with different signs).
The first two conditions can be combined into one:
b > 0,   c > 0,   or   a_11 + a_22 > 0,   a_11 a_22 - a_12 a_21 > 0.
The domain of instability of system (12.7.3.11) covers the first, third, and fourth quadrants in the plane b, c
(shaded in Fig 12.10).
3. The conditions obtained above in Items 1 and 2 do not cover the whole domain of variation of the parameters a_ij. Stability or instability is not established for the boundary of the second quadrant (shown by the thick solid line in Fig 12.10). This corresponds to the cases
b = 0,   c ≥ 0   (two purely imaginary or two zero roots);
c = 0,   b ≤ 0   (one zero root and one negative real or zero root).
Specific examples of such systems are considered below in Paragraph 12.7.3-3.
Remark. When the conditions of Item 1 or 2 hold, the phase trajectories of the nonlinear system (12.7.3.11) have the same qualitative arrangement in a neighborhood of the equilibrium point x = y = 0 as the phase trajectories of the linearized system (with ψ_1 = ψ_2 = 0). A detailed classification of equilibrium points of linear systems, with the associated arrangements of the phase trajectories, can be found in Paragraph 12.6.1-7.
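The conditions of Items 1–3 are easy to apply programmatically; in the sketch below the coefficients a_ij are arbitrary illustrative values, and the classification only mirrors the sufficient conditions stated above.

```python
import numpy as np

# Illustrative coefficient matrix of the linearized system (12.7.3.11).
A = np.array([[-1.0,  2.0],
              [-3.0, -4.0]])

b = A[0, 0] + A[1, 1]                        # b = a11 + a22
c = A[0, 0]*A[1, 1] - A[0, 1]*A[1, 0]        # c = a11*a22 - a12*a21

if b < 0 and c > 0:
    verdict = "stable (sufficient conditions of Item 1)"
elif (b > 0 and c > 0) or c < 0:
    verdict = "unstable (sufficient conditions of Item 2)"
else:
    verdict = "not settled by Items 1-2 (boundary cases, cf. Item 3)"

# Cross-check: the roots of lambda^2 - b*lambda + c = 0 are the eigenvalues of A.
print(np.linalg.eigvals(A), verdict)
```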