
DSpace at VNU: Approximation of spectral intervals and leading directions for differential-algebraic equation via smooth singular value decompositions




APPROXIMATION OF SPECTRAL INTERVALS AND LEADING DIRECTIONS FOR DIFFERENTIAL-ALGEBRAIC EQUATION VIA SMOOTH SINGULAR VALUE DECOMPOSITIONS

VU HOANG LINH AND VOLKER MEHRMANN

Abstract. This paper is devoted to the numerical approximation of Lyapunov and Sacker–Sell spectral intervals for linear differential-algebraic equations (DAEs). The spectral analysis for DAEs is improved and the concepts of leading directions and solution subspaces associated with spectral intervals are extended to DAEs. Numerical methods based on smooth singular value decompositions are introduced for computing all or only some spectral intervals and their associated leading directions. The numerical algorithms as well as implementation issues are discussed in detail, and numerical examples are presented to illustrate the theoretical results.

Key words. differential-algebraic equation, strangeness index, Lyapunov exponent, Bohl exponent, Sacker–Sell spectrum, exponential dichotomy, spectral interval, leading direction, smooth singular value decomposition

AMS subject classifications. 65L07, 65L80, 34D08, 34D09

DOI. 10.1137/100806059

1. Introduction. In this paper, we study the spectral analysis for linear differential-algebraic equations (DAEs) with variable coefficients

(1.1) E(t)ẋ(t) = A(t)x(t) + f(t)

on the half-line I = [0, ∞), together with an initial condition x(0) = x_0. Here we assume that E, A ∈ C(I, R^{n×n}) and f ∈ C(I, R^n) are sufficiently smooth. We use the notation C(I, R^{n×n}) to denote the space of continuous functions from I to R^{n×n}. Linear systems of the form (1.1) arise when one linearizes a general implicit nonlinear system of DAEs

(1.2) F(t, x, ẋ) = 0

along a particular solution [11].

DAEs are an important and convenient modeling concept in many different application areas; see [8, 23, 26, 27, 37] and the references therein. However, many numerical difficulties arise due to the fact that the dynamics is constrained to a manifold, which often is only given implicitly; see [27, 36, 37].

Similar to the situation of constant coefficient systems, where the spectral theory is based on eigenvalues and associated eigenvectors or invariant subspaces, in the variable coefficient case one is interested in the spectral intervals and associated leading directions, i.e., the initial vectors that lead to specific spectral intervals. We introduce

∗Received by the editors August 20, 2010; accepted for publication (in revised form) June 24, 2011; published electronically September 15, 2011. This research was supported by Deutsche Forschungsgemeinschaft through Matheon, the DFG Research Center “Mathematics for Key Technologies” in Berlin.

http://www.siam.org/journals/sinum/49-5/80605.html

Faculty of Mathematics, Mechanics and Informatics, Vietnam National University, 334 Nguyen Trai Str., Thanh Xuan, Hanoi, Vietnam (linhvh@vnu.edu.vn). This author’s work was supported by the Alexander von Humboldt Foundation and partially by VNU’s Project QG 10-01.

Institut für Mathematik, MA 4-5, Technische Universität Berlin, D-10623 Berlin, Germany (mehrmann@math.tu-berlin.de).



these concepts for DAEs and develop numerical methods for computing this spectral information on the basis of smooth singular value decompositions associated with the homogeneous version of (1.1).

The numerical approximation of Lyapunov exponents for ordinary differential equations (ODEs) has been investigated widely; see, e.g., [3, 4, 6, 9, 12, 19, 20, 21, 24, 25] and the references therein. Recently, in [31, 33], the classical spectral theory for ODEs such as Lyapunov, Bohl, and Sacker–Sell intervals (see [1] and the references therein) was extended to DAEs. It was shown that there are substantial differences in the theory and that most results for ODEs hold for DAEs only under further restrictions. In [31, 33] also the numerical methods (based on QR factorization) for computing spectral quantities of ODEs of [20, 22] were extended to DAEs.

In this paper, motivated by the results in [17, 18] for ODEs, we present a characterization for the leading directions and solution subspaces associated with the spectral intervals of (1.1). Using the approach of [33], we also discuss the extension of recent methods introduced in [17, 18] to DAEs. These methods compute the spectral intervals of ODEs and their associated leading directions via smooth singular value decompositions (SVDs). Under an integral separation condition, we show that these SVD based methods apply directly to DAEs. Most of the theoretical results as well as the numerical methods are direct generalizations of [17] but, furthermore, we also prove that the limit (as t tends to infinity) of the V-component in the smooth SVD of any fundamental solution provides not only a normal basis, but also an integrally separated fundamental solution matrix; see Theorem 4.11. This significantly improves Theorem 5.14 and Corollary 5.15 in [17].

The outline of the paper is as follows. In the following section, we revisit the spectral theory of differential-algebraic equations that was developed in [31]. In section 3 we extend the concepts of leading directions and growth subspaces associated with spectral intervals to DAEs. In section 4, we propose continuous SVD methods for approximating the spectral intervals and leading directions. Algorithmic details and comparisons of the methods are discussed as well. Finally, in section 5 some numerical experiments are presented to illustrate the theoretical results as well as the efficiency of the SVD method.

2. Spectral theory for strangeness-free DAEs.

2.1. Strangeness-free DAEs. General linear DAEs with variable coefficients have been studied in detail in the last twenty years; see [27] and the references therein. In order to understand the solution behavior and to obtain numerical solutions, the necessary information about derivatives of equations has to be used. This has led to the concept of the strangeness index, which under very mild assumptions allows the DAE and (some of) its derivatives to be reformulated as a system with the same solution that is strangeness-free, i.e., for which the algebraic and differential part of the system are easily separated.

In this paper, for the discussion of spectral intervals, we restrict ourselves to regular DAEs; i.e., we require that (1.1) (or (1.2) locally) has a unique solution for sufficiently smooth E, A, f (F) and appropriately chosen (consistent) initial conditions; see again [27] for a discussion of existence and uniqueness of solutions of more general nonregular DAEs.

With this theory and appropriate numerical methods available, for regular DAEs we may assume that the homogeneous DAE in consideration is already strangeness-free and has the form

(2.1) E(t)ẋ(t) = A(t)x(t), with E = [E_1; 0], A = [A_1; A_2],

where E_1, A_1 ∈ C(I, R^{d×n}), A_2 ∈ C(I, R^{(n−d)×n}), and E_1 and A_2 have full row rank. All coefficients are supposed to be sufficiently smooth so that the convergence results for the numerical methods [27] applied to (2.1) hold. It is then easy to see that an initial vector x_0 ∈ R^n is consistent for (2.1) if and only if it satisfies the algebraic equation

(2.2) A_2(0)x_0 = 0.
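As an illustration of the consistency condition, a consistent initial vector can be computed numerically by projecting an arbitrary candidate onto the null space of A_2(0). The sketch below uses NumPy; the matrix A2_0 is an invented toy example, not taken from the paper.

```python
import numpy as np

# Hypothetical algebraic constraint matrix A2(0) with full row rank (n = 3, a = 1),
# so the consistent set {x : A2(0) x = 0} has dimension d = n - a = 2.
A2_0 = np.array([[1.0, 1.0, 1.0]])

# Orthonormal basis of the null space of A2(0) via the SVD:
# rows of Vt beyond rank(A2_0) span ker(A2_0).
_, s, Vt = np.linalg.svd(A2_0)
rank = int(np.sum(s > 1e-12))
N = Vt[rank:].T              # n x d, orthonormal columns

# Project an arbitrary candidate onto the consistent set.
x0_raw = np.array([1.0, 2.0, 3.0])
x0 = N @ (N.T @ x0_raw)

# x0 is now consistent: A2(0) x0 = 0 up to rounding.
residual = np.linalg.norm(A2_0 @ x0)
```

The same projection is what a numerical integrator for (2.1) must apply whenever a computed state drifts off the constraint manifold.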

The following lemma, which can be viewed as a generalized Schur form for matrix functions, is the key to the theory and numerical methods for the computation of spectral intervals for DAEs. It is a slight modification of [31, Lemma 7], using also different notation to avoid confusion with later sections.

Lemma 2.1. Consider a strangeness-free DAE system of the form (2.1) with continuous coefficients E, A. Let Û ∈ C^1(I, R^{n×d}) be an arbitrary orthonormal basis of the solution subspace of (2.1). Then there exists a matrix function P̂ ∈ C(I, R^{n×d}) with pointwise orthonormal columns such that, by the change of variables x = Ûz and multiplication of both sides of (2.1) from the left by P̂^T, one obtains the system

(2.3) ℰ(t)ż(t) = 𝒜(t)z(t),

where ℰ := P̂^T E Û, 𝒜 := P̂^T A Û − P̂^T E (dÛ/dt), and ℰ is upper triangular.

Proof. Considering an arbitrary solution x and substituting x = Ûz into (2.1), we obtain

(2.4) E Û ż = [A Û − E (dÛ/dt)] z.

Since (2.1) is strangeness-free, and since A_2 Û = 0, we have that the matrix E Û must have full column rank. Thus (see [16]) there exists a smooth QR decomposition

E Û = P̂ ℰ,

where the columns of P̂ form an orthonormal set and ℰ is nonsingular and upper triangular. This decomposition is unique if the diagonal elements of ℰ are chosen positive. Multiplying both sides of (2.4) by P̂^T, we arrive at

ℰ ż = [P̂^T A Û − P̂^T E (dÛ/dt)] z.

Finally, setting 𝒜 := P̂^T A Û − P̂^T E (dÛ/dt) completes the proof.
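The construction in this proof is directly computable pointwise. A minimal NumPy sketch on invented random data (the sign normalization that makes the diagonal of the triangular factor positive, and hence the factorization unique, is made explicit):

```python
import numpy as np

def euode_coefficients(E, A, U, Udot):
    """Pointwise EUODE coefficients as in Lemma 2.1:
    QR-factor E @ U = P @ Ecal with Ecal upper triangular, positive diagonal,
    then Acal = P.T @ A @ U - P.T @ E @ Udot."""
    P, Ecal = np.linalg.qr(E @ U)          # reduced QR: P is n x d, Ecal d x d
    # Flip signs so that diag(Ecal) > 0, which makes the factorization unique.
    signs = np.where(np.diag(Ecal) < 0, -1.0, 1.0)
    P, Ecal = P * signs, signs[:, None] * Ecal
    Acal = P.T @ A @ U - P.T @ E @ Udot
    return P, Ecal, Acal

# Toy data (random, for shape/property checks only): n = 4, d = 2.
rng = np.random.default_rng(0)
E = rng.standard_normal((4, 4))
A = rng.standard_normal((4, 4))
U, _ = np.linalg.qr(rng.standard_normal((4, 2)))   # orthonormal basis columns
Udot = rng.standard_normal((4, 2))

P, Ecal, Acal = euode_coefficients(E, A, U, Udot)
```

In an actual integration, U and Udot would come from tracking the solution subspace of (2.1) rather than from random data.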

System (2.3) is an implicitly given ODE, since ℰ is nonsingular. It is called an essentially underlying implicit ODE system (EUODE) of (2.1), and it can be made explicit by multiplying with ℰ^{−1} from the left; see also [2] for constructing EUODEs of so-called properly stated DAEs. In our numerical methods we will need to construct the coefficients of the EUODE pointwise. Note, however, that in (2.4) for a fixed given Û, the matrix function P̂ is not unique. In fact, any P̂ for which P̂^T E Û is invertible yields an implicit EUODE. However, ℰ^{−1}𝒜 is obviously unique; i.e., with a given basis, the explicit EUODE provided by Lemma 2.1 is unique. In the numerical methods, however, we need to choose the matrix function P̂ appropriately.

For the theoretical analysis we will heavily use the fact that, for a given basis Û, the correspondence between the solutions of (2.1) and those of (2.3) is one to one; i.e., x is a solution of (2.1) if and only if z = Û^T x is a solution of (2.3). Different special choices of the basis Û will, however, lead to different methods for approximating Lyapunov exponents. Note that Û Û^T is just a projection onto the solution subspace of (2.1); hence z = Û^T x implies Ûz = Û Û^T x = x.

2.2. Lyapunov exponents and Lyapunov spectral intervals. In the following we briefly recall the basic concepts of the spectral theory for DAEs; see [31] for details.

Definition 2.2. A matrix function X ∈ C^1(I, R^{n×k}), d ≤ k ≤ n, is called a fundamental solution matrix of the strangeness-free DAE (2.1) if each of its columns is a solution to (2.1) and rank X(t) = d for all t ≥ 0. A fundamental solution matrix is said to be minimal if k = d.

One may construct a minimal fundamental matrix solution by solving initial value problems for (2.1) with d linearly independent, consistent initial vectors. For example, let Q_0 ∈ R^{n×n} be a nonsingular matrix such that A_2(0)Q_0 =

we define upper and lower Lyapunov exponents for vector valued functions, where the absolute values are replaced by norms.

For a constant c ≠ 0 and nonvanishing functions f_1, ..., f_j, Lyapunov exponents satisfy


where e_i denotes the ith unit vector and ||·|| denotes the Euclidean norm. The columns of a minimal fundamental solution matrix form a normal basis if Σ_{i=1}^d λ_i^u is minimal. The λ_i^u, i = 1, 2, ..., d, belonging to a normal basis are called (upper) Lyapunov exponents, and the intervals [λ_i^ℓ, λ_i^u], i = 1, ..., d, are called Lyapunov spectral intervals.
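Numerically, upper Lyapunov exponents are approximated by the finite-time quantities (1/T) ln ||X(T)e_i||. A toy sketch for a system with known exponents (a diagonal constant-coefficient ODE, not a DAE; its closed-form fundamental solution stands in for a numerical integration):

```python
import numpy as np

# Toy ODE x' = diag(-1, 0.5) x, whose exact Lyapunov exponents are -1 and 0.5.
lam_exact = np.array([-1.0, 0.5])
T = 50.0

# Fundamental solution at time T (in closed form here; in practice one
# integrates numerically, with reorthogonalization to control error growth).
X_T = np.diag(np.exp(lam_exact * T))

# Finite-time upper Lyapunov exponents: (1/T) * ln ||X(T) e_i||.
lam_approx = np.log(np.linalg.norm(X_T, axis=0)) / T
```

For this diagonal example the finite-time values are already exact; for genuinely time-varying systems they converge only as T grows and the basis is kept normal.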

Definition 2.5. Suppose that P ∈ C(I, R^{n×n}) and Q ∈ C^1(I, R^{n×n}) are nonsingular matrix functions such that Q and Q^{−1} are bounded. Then the transformed DAE system

Ẽ(t)(dx̃/dt) = Ã(t)x̃, with Ẽ = PEQ, Ã = PAQ − PE(dQ/dt), and x = Qx̃,

is called globally kinematically equivalent to (2.1), and the transformation is called a global kinematic equivalence transformation. If P ∈ C^1(I, R^{n×n}) and, furthermore, also P and P^{−1} are bounded, then we call this a strong global kinematic equivalence transformation.

The Lyapunov exponents of a DAE system, as well as the normality of a basis formed by the columns of a fundamental solution matrix, are preserved under global kinematic equivalence transformations.

Proposition 2.6. For any given minimal fundamental matrix X of (2.1) for which the Lyapunov exponents of the columns are ordered decreasingly, there exists a constant, nonsingular, and upper triangular matrix C ∈ R^{d×d} such that the columns of XC form a normal basis for (2.1).

Proof. Since orthonormal changes of basis keep the Euclidean norm invariant, the spectral analysis of (2.1) can be done via its EUODE. Thus, let Z be the corresponding fundamental matrix of (2.3), X = ÛZ. Due to the existence result of a normal basis for ODEs [34] (see also [1, 20]), there exists a matrix C with the properties listed in the assertion such that ZC is a normal basis for (2.3). Thus XC = ÛZC is a normal basis for (2.1).

The fundamental solutions X and Z satisfy the following relation.

Theorem 2.7 (see [31]). Let X be a normal basis for (2.1). Then the Lyapunov spectrum of the DAE (2.1) and that of the ODE (2.3) are the same. If ℰ, 𝒜 are as in (2.3) and if ℰ^{−1}𝒜 is bounded, then all the Lyapunov exponents of (2.1) are finite. Furthermore, the spectrum of (2.3) does not depend on the choice of the basis Û and the matrix function P̂.

Similar to the regularity concept for DAEs introduced in [14], we have the following definition.

Definition 2.8. The DAE system (2.1) is said to be Lyapunov-regular if its EUODE (2.3) is Lyapunov-regular; i.e., if

The Lyapunov-regularity of a strangeness-free DAE system (2.1) is well defined, since it does not depend on the construction of (2.3). Furthermore, the Lyapunov-regularity of (2.1) implies that for any nontrivial solution x, the limit lim_{t→∞} (1/t) ln ||x(t)|| exists. Hence, we have λ_i^ℓ = λ_i^u; i.e., the Lyapunov spectrum of (2.1) is a point spectrum.


We stress that unlike the approach in [14], where certain inherent ODEs of the same size as the original DAE are used, our spectral analysis is based on the essentially underlying ODEs, which have reduced size and can be constructed numerically.

Lyapunov exponents may be very sensitive under small changes in the system. The stability analysis for the Lyapunov exponents is discussed in detail in [31] (see also [32]). As in the case of ODEs (but with some extra boundedness conditions), the stability can be characterized via the concept of integral separation, and the stability can be checked via the computation of Steklov differences.

Definition 2.9. A minimal fundamental solution matrix X for (2.1) is called integrally separated if for i = 1, 2, ..., d − 1 there exist constants c_1 > 0 and c_2 > 0 such that

(||X(t)e_i|| / ||X(s)e_i||) · (||X(s)e_{i+1}|| / ||X(t)e_{i+1}||) ≥ c_2 e^{c_1(t−s)}

for all t, s with t ≥ s ≥ 0. If a DAE system has an integrally separated minimal fundamental solution matrix, then we say that it has the integral separation property.
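On a finite time grid, Definition 2.9 can be tested by brute force: compare the ratio against c_2 e^{c_1(t−s)} for every grid pair t ≥ s and every adjacent column pair. A sketch on invented data with exactly known exponents (the constants c_1, c_2 are chosen by hand and slightly relaxed for rounding):

```python
import numpy as np

def is_integrally_separated(norms, tgrid, c1, c2):
    """Check the integral separation inequality on a grid.
    norms[i, k] = ||X(t_k) e_i||; returns True if the ratio bound holds
    for all t_k >= t_l and all adjacent column pairs i, i+1."""
    d, m = norms.shape
    for i in range(d - 1):
        for k in range(m):            # index of t
            for l in range(k + 1):    # index of s <= t
                t, s = tgrid[k], tgrid[l]
                ratio = (norms[i, k] / norms[i, l]) * (norms[i + 1, l] / norms[i + 1, k])
                if ratio < c2 * np.exp(c1 * (t - s)):
                    return False
    return True

# Toy fundamental solution with exponents 1 and -1: the ratio equals
# exp(2 (t - s)), so the system is integrally separated with c1 = 2.
tgrid = np.linspace(0.0, 5.0, 21)
norms = np.vstack([np.exp(tgrid), np.exp(-tgrid)])

sep = is_integrally_separated(norms, tgrid, c1=2.0, c2=0.9)
```

Swapping the two rows (so the slower solution comes first) makes the check fail, as it should.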

The integral separation property is invariant under strong global kinematic equivalence transformations. Furthermore, if a fundamental solution X of (2.1) is integrally separated, then so is the corresponding fundamental solution Z of (2.3), and vice versa.

2.3. Bohl exponents and Sacker–Sell spectrum. Further concepts that are important to describe the qualitative behavior of solutions to ordinary differential equations are the exponential-dichotomy or Sacker–Sell spectra [38] and the Bohl exponents [7] (see also [15]). The extension of these concepts to DAEs has been presented in [31].

Definition 2.10. Let x be a nontrivial solution of (2.1). The upper Bohl exponent κ_B^u(x) of this solution is the greatest lower bound of all those values ρ for which there exist constants N_ρ > 0 such that

||x(t)|| ≤ N_ρ e^{ρ(t−s)} ||x(s)||

for any t ≥ s ≥ 0. If such numbers ρ do not exist, then one sets κ_B^u(x) = +∞. Similarly, the lower Bohl exponent κ_B^ℓ(x) is the least upper bound of all those values ρ′ for which there exist constants N_{ρ′} > 0 such that

||x(t)|| ≥ N_{ρ′} e^{ρ′(t−s)} ||x(s)||, 0 ≤ s ≤ t.

Lyapunov exponents and Bohl exponents are related via

κ_B^ℓ(x) ≤ λ^ℓ(x) ≤ λ^u(x) ≤ κ_B^u(x).

Bohl exponents characterize the uniform growth rate of solutions, while Lyapunov exponents simply characterize the growth rate of solutions departing from t = 0, and the formulas characterizing Bohl exponents for ODEs (see, e.g., [15]) immediately extend to DAEs.

Moreover, unlike the Lyapunov exponents, the Bohl exponents are stable with respect to admissible perturbations without the integral separation assumption; see [13, 31].
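Because Bohl exponents measure uniform growth over windows [s, t] with arbitrary starting point, a crude finite-horizon estimate of κ_B^u is the largest windowed growth rate; this also illustrates the ordering κ_B^u(x) ≥ λ^u(x) recalled above. A sketch on an invented piecewise-exponential solution:

```python
import numpy as np

def upper_bohl_estimate(norms, tgrid, min_gap):
    """Crude finite-horizon estimate of the upper Bohl exponent of one
    solution: the largest windowed growth rate ln(||x(t)||/||x(s)||)/(t - s)
    over all windows with t - s >= min_gap. (The true exponent is a lim sup
    as the window start and length go to infinity.)"""
    rates = []
    m = len(tgrid)
    for l in range(m):
        for k in range(l + 1, m):
            gap = tgrid[k] - tgrid[l]
            if gap >= min_gap:
                rates.append(np.log(norms[k] / norms[l]) / gap)
    return max(rates)

# Solution of x' = a(t) x with a(t) = 2 on [0, 1] and a(t) = -1 afterwards.
tgrid = np.linspace(0.0, 10.0, 201)
logx = np.where(tgrid <= 1.0, 2.0 * tgrid, 2.0 - (tgrid - 1.0))
norms = np.exp(logx)

kappa_u = upper_bohl_estimate(norms, tgrid, min_gap=0.5)   # sees the early rate 2
lam_T = np.log(norms[-1]) / tgrid[-1]                      # finite-time Lyapunov exponent
```

The early transient dominates the uniform rate (kappa_u is about 2), while the finite-time Lyapunov exponent is much smaller, so kappa_u > lam_T as the ordering predicts.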


Definition 2.11. The DAE (2.1) is said to have exponential dichotomy if for any minimal fundamental solution X there exist a projection Π ∈ R^{d×d} and positive constants K and α such that

||X(t)ΠX^+(s)|| ≤ K e^{−α(t−s)}, t ≥ s,
||X(t)(I_d − Π)X^+(s)|| ≤ K e^{α(t−s)}, s > t,

where X^+ denotes the generalized Moore–Penrose inverse of X.

Let X be a fundamental solution matrix of (2.1), and let the columns of Û form an orthonormal basis of the solution subspace; then we have X = ÛZ, where Z is the fundamental solution matrix of the corresponding EUODE (2.3) and hence invertible. Observing that X^+ = Z^{−1}Û^T, we have

In order to extend the concept of exponential dichotomy spectrum to DAEs, we need shifted DAE systems

(2.8) E(t)ẋ = (A(t) − λE(t))x,

where λ ∈ R. By using the transformation as in Lemma 2.1, we obtain the corresponding shifted EUODE for (2.8).

The Sacker–Sell spectrum Σ_S of (2.1) consists of at most d closed intervals.

Using the same arguments as in [31, section 3.4], one can show that under some boundedness conditions, the Sacker–Sell spectrum of the DAE (2.1) is stable with respect to admissible perturbations. Theorem 50 in [31] also states that if X is an integrally separated fundamental matrix of (2.1), then the Sacker–Sell spectrum of the system is exactly given by the d (not necessarily disjoint) Bohl intervals associated with the columns of X. In the remainder of the paper, we assume that Σ_S consists of p ≤ d pairwise disjoint spectral intervals, i.e., Σ_S = ∪_{i=1}^p [a_i, b_i], with b_i < a_{i+1} for all 1 ≤ i ≤ p − 1. This assumption can easily be achieved by combining possibly overlapping spectral intervals into larger intervals.
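That last step, combining possibly overlapping Bohl intervals into pairwise disjoint spectral intervals, is a simple interval-merging pass; a sketch with invented interval data:

```python
def merge_spectral_intervals(intervals):
    """Combine possibly overlapping intervals [a_i, b_i] into pairwise
    disjoint ones, as assumed for the Sacker-Sell spectrum Sigma_S."""
    merged = []
    for a, b in sorted(intervals):
        if merged and a <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], b)   # overlap: extend last interval
        else:
            merged.append([a, b])                   # disjoint: start a new one
    return [tuple(iv) for iv in merged]

# d = 4 Bohl intervals, two of which overlap -> p = 3 disjoint intervals.
sigma_S = merge_spectral_intervals([(-2.0, -1.5), (-1.6, -1.0), (0.0, 0.5), (2.0, 2.0)])
```

Degenerate intervals [a, a] (point spectrum entries) are handled like any other.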


3. Leading directions and subspaces. As we have noted before, initial vectors of (2.1) must be chosen consistently, and they form a d-dimensional subspace in R^n. Furthermore, the solutions of (2.1) also form a d-dimensional subspace of functions in C^1(I, R^n). Let us denote these spaces by S_0 and S(t), respectively. Furthermore, for x_0 ∈ S_0 let us denote by x(t; x_0) the (unique) solution of (2.1) that satisfies x(0; x_0) = x_0. In order to obtain geometrical information about the subspaces of solutions which have a specific growth, we extend the analysis for ODEs given in [17] to DAEs.

For j = 1, ..., d, define the set W_j of all consistent initial conditions w such that the upper Lyapunov exponent of the solution x(t; w) of (2.1) satisfies χ^u(x(·; w)) ≤ λ_j^u, i.e.,

W_j = {w ∈ S_0 : χ^u(x(·; w)) ≤ λ_j^u}, j = 1, ..., d.

Let the columns of Û(·) form a smoothly varying basis of the solution subspace S(·) of (2.1) and consider an associated EUODE (2.3). Then we can consider (2.3) and, instead of W_j, the corresponding set of all initial conditions for (2.3) that lead to Lyapunov exponents not greater than λ_j^u. In this way it is obvious that the ODE results in [17, Propositions 2.8–2.10] apply to EUODEs of the form (2.3) and, as a consequence of Theorem 2.7, we obtain several analogous statements for (2.1). First, we state a result on the subspaces W_j.

Proposition 3.1. Let d_j be the largest number of linearly independent solutions x of (2.1) such that lim sup_{t→∞} (1/t) ln ||x(t)|| = λ_j^u. Then W_j is a d_j-dimensional linear subspace of S_0. Furthermore, the spaces W_j, j = 1, 2, ..., form a filtration of S_0; i.e., if p is the number of distinct upper Lyapunov exponents of the system, then we have

S_0 = W_1 ⊃ W_2 ⊃ · · · ⊃ W_p ⊃ W_{p+1} = {0}.

It follows that lim sup_{t→∞} (1/t) ln ||x(t; w)|| = λ_j^u if and only if w ∈ W_j \ W_{j+1}. Moreover, if we have d distinct upper Lyapunov exponents, then the dimension of W_j is d − j + 1.

It follows that if we have p = d distinct Lyapunov exponents, then dim(Y_j) = 1 for all j = 1, ..., d. In the next section, similar to [17, 18], we will approximate the spaces Y_j by using smooth singular value decompositions (see [10, 16]) of fundamental solutions.

If the DAE system (2.1) is integrally separated, then it can be shown that the sets W_j, Y_j can also be used to characterize the set of initial vectors leading to lower Lyapunov exponents; see [17, Proposition 2.10] for details.

Consider now the resolvent set ρ(E, A). For μ ∈ ρ(E, A), let us first define the stable set associated with (2.1),

S_μ = {w ∈ S_0 : lim_{t→∞} e^{−μt} ||x(t; w)|| = 0}.

Furthermore, for μ_1, μ_2 ∈ ρ(E, A), μ_1 < μ_2, we have S_{μ_1} ⊆ S_{μ_2}.


In the following we study the EUODE (2.3) associated with (2.1). For simplicity, we assume that Z is the principal matrix solution, i.e., Z(0) = I_d. This can always be achieved by an appropriate kinematic equivalence transformation. Following the construction for ODEs in [17], we characterize the sets

associated with (2.3). Recalling that p is the number of disjoint spectral intervals, let us now choose a set of values μ_0 < μ_1 < · · · < μ_p such that μ_j ∈ ρ(E, A) and Σ_S ∩ (μ_{j−1}, μ_j) = [a_j, b_j] for j = 1, ..., p. In other words, we have

N_j^d is a linear subspace of dimension dim(N_j^d) ≥ 1 with the following properties:

Let Û be an orthonormal basis of the solution subspace for (2.1) and introduce

This means that N_j is the subspace of initial conditions associated with solutions of (2.1) whose upper and lower Lyapunov exponents are located inside [a_j, b_j]. The next theorem characterizes the uniform exponential growth of the solutions of (2.1).


Theorem 3.5. Consider the EUODE (2.3) associated with (2.1) and the sets N_j defined in (3.4), j = 1, ..., p. Then w ∈ N_j \ {0} if and only if

(3.5) K_{j−1}^{−1} e^{a_j(t−s)} ≤ ||x(t; w)|| / ||x(s; w)|| ≤ K_j e^{b_j(t−s)} for all t ≥ s ≥ 0,

and some positive constants K_{j−1}, K_j.

Proof. Due to the construction of the EUODE (2.3) (see Lemma 2.1), we have x(t; w) = Û(t)Z(t)v, where v = Û(0)^T w, and thus ||x(t; w)|| = ||Z(t)v||. Theorem 3.9 and Remark 3.10 of [17] state that v ∈ N_j^d if and only if

K_{j−1}^{−1} e^{a_j(t−s)} ≤ ||Z(t)v|| / ||Z(s)v|| ≤ K_j e^{b_j(t−s)} for all t ≥ s ≥ 0,

and some positive constants K_{j−1}, K_j. Hence, the inequalities (3.5) follow immediately.

We can also characterize the relationship of the sets N_j and the Bohl exponents.

Corollary 3.6. Consider the EUODE (2.3) associated with (2.1) and the sets N_j defined in (3.4). Then for all j = 1, ..., p, w ∈ N_j \ {0} if and only if a_j ≤ κ^ℓ(x(·; w)) ≤ κ^u(x(·; w)) ≤ b_j, where κ^ℓ, κ^u are the Bohl exponents.

Proof. The proof follows from Theorem 3.5 and the definition of Bohl exponents.

To this end, take an arbitrary w ∈ Û(0)S_{μ_j}^d. Then the corresponding initial value for (2.3) defined by v = Û(0)^T w clearly belongs to S_{μ_j}^d, and w = Û(0)v holds. By considering the one-to-one relation between the solutions of (2.1) and those of its associated EUODE (2.3) and using that ||x(t; w)|| = ||Z(t)v||, we see that v ∈ S_{μ_j}^d implies that w ∈ S_{μ_j}. Conversely, take an arbitrary w ∈ S_{μ_j}. Then there exists a unique v ∈ R^d which satisfies w = Û(0)v. Using again that ||x(t; w)|| = ||Z(t)v||, the claim v ∈ S_{μ_j}^d follows from the definition of S_{μ_j} and that of S_{μ_j}^d.

(ii) As a consequence of Theorem 3.4 in [17], we have S^d

to DAEs.


4. SVD-based methods for DAEs. In this section we extend the approach in [17, 18] of using smooth SVD factorizations for the computation of spectral intervals of ODEs to DAEs. We assume again that the DAE system is given in the strangeness-free form (2.1); i.e., whenever the value of E(t), A(t) is needed, it has to be computed from the derivative array as described in [27]. This can be done, for example, with the FORTRAN code GELDA [29] or the corresponding MATLAB version [30].

Let X be an (arbitrary) minimal fundamental matrix solution of (2.1); in particular, assume that X ∈ C^1(I, R^{n×d}). Suppose that we are able to compute a smooth SVD

(4.1) X(t) = U(t)Σ(t)V(t)^T,

where U ∈ C^1(I, R^{n×d}), V, Σ ∈ C^1(I, R^{d×d}), U^T(t)U(t) = V^T(t)V(t) = I_d for all t ∈ I, and Σ(t) = diag(σ_1(t), ..., σ_d(t)) is diagonal. We assume that U, Σ, and V possess the same smoothness as X. This holds, e.g., if X(t) is analytic (see [10]) or if the singular values of X(t) are distinct for all t (see [16]). The explicit construction of smooth SVDs is rather computationally expensive; see [10] and the proof of [27, Theorem 3.9].

Remark 4.1. Note that U in the smooth SVD of X and Û, as in the construction of the EUODE, play the same role: they form orthonormal bases of the corresponding solution subspace S, so we are in fact in the special case of the analysis in section 2. If we set Z = ΣV^T, then this is a fundamental solution of the resulting EUODE of the form (2.3). Furthermore, the factorization Z = ΣV^T is the SVD of the specially chosen fundamental solution Z.

We will also demonstrate how to modify the methods of [17, 18] to approximate only a few (dominant) spectral intervals. For this we need to select ℓ (ℓ < d) columns of a fundamental solution, i.e., ℓ linearly independent solutions of the DAE (2.1), and proceed to work with them. There are essentially two approaches that can be extended from ODEs to DAEs: a discrete and a continuous approach. In the discrete SVD method, the fundamental matrix solution X is indirectly evaluated by solving (2.1) on subintervals, and to reduce the accumulation of errors, the numerical integration is performed with a reorthogonalization. The discrete method is relatively easy to implement but suffers from the fact that very small stepsizes have to be used and that it needs a product SVD in each step, which requires a lot of storage; see [18, 32]. Due to the described disadvantages we will discuss only the continuous SVD approach; see [32] for a more detailed description and comparison of the two approaches.

4.1. The continuous SVD method. In the continuous SVD method one derives differential-algebraic equations for the factors U, Σ, and V and solves the corresponding initial value problems via numerical integration. If we differentiate the expression for X in (4.1) with respect to t and substitute the result into (2.1), we


because the columns of U form an orthonormal basis of a subspace of the solution subspace. If we then differentiate (4.2) and insert this, we obtain

The latter two matrix functions are of size d × d (or ℓ × ℓ in the reduced case).

We determine a matrix function P ∈ C(I, R^{n×d}) with orthonormal columns, i.e., P^T P = I_d, such that

(4.4) E U = P ℰ,

where ℰ is nonsingular and upper triangular with positive diagonal entries. Due to [33, Lemma 12], this defines P, ℰ uniquely. The numerical computation of this pair will be discussed later.

The following property of ℰ is important in the proof of numerical stability.

Denoting by cond(M) the normwise condition number of a matrix M with respect to inversion, as a consequence of Proposition 4.2 we have that cond(ℰ) ≤ cond(Ē), and thus the sensitivity of the implicit EUODE (2.3) that we are using to compute the spectral intervals is not larger than that of the original DAE.

Multiplying both sides of (4.3) with P T from the left, we obtain


which is almost the same differential equation as in the ODE case (see [17, 18]); there is just a different formula for C = [c_{i,j}]. Using the skew-symmetry of H = [h_{i,j}] and K = [k_{i,j}] and that Σ = diag(σ_1, ..., σ_d) is diagonal, we obtain the expressions

h_{i,j} = (c_{i,j} σ_j^2 + c_{j,i} σ_i^2) / (σ_j^2 − σ_i^2) for i > j, and h_{i,j} = −h_{j,i} for i < j;

k_{i,j} = (c_{i,j} + c_{j,i}) σ_i σ_j / (σ_j^2 − σ_i^2) for i > j, and k_{i,j} = −k_{j,i} for i < j.

is also linear and the same as that of (2.1). We will discuss the efficient integration of this particular matrix DAE (4.9) below.
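The formulas for H and K above translate directly into code. In the sketch below, the denominator of k_{i,j} and the diagonal equation σ̇_i = c_{i,i} σ_i are completed from the standard continuous-SVD equations of [17, 18] and should be treated as assumptions where this copy of the text is incomplete; distinct singular values are required.

```python
import numpy as np

def svd_ode_rhs(C, sigma):
    """Right-hand sides of the continuous-SVD differential equations:
    skew-symmetric H and K built entrywise from C = [c_ij] and the
    (assumed distinct) singular values sigma, plus sigma' = diag(C) * sigma."""
    d = len(sigma)
    H = np.zeros((d, d))
    K = np.zeros((d, d))
    for i in range(d):
        for j in range(i):          # i > j
            denom = sigma[j] ** 2 - sigma[i] ** 2
            H[i, j] = (C[i, j] * sigma[j] ** 2 + C[j, i] * sigma[i] ** 2) / denom
            H[j, i] = -H[i, j]      # skew-symmetry
            K[i, j] = (C[i, j] + C[j, i]) * sigma[i] * sigma[j] / denom
            K[j, i] = -K[i, j]
    sigma_dot = np.diag(C) * sigma
    return H, K, sigma_dot

# Toy data: a random C and well-separated singular values.
rng = np.random.default_rng(1)
C = rng.standard_normal((3, 3))
sigma = np.array([3.0, 2.0, 1.0])

H, K, sigma_dot = svd_ode_rhs(C, sigma)
```

When two singular values nearly coalesce, the denominators blow up, which is exactly the ill-conditioning the integral separation assumption (4.10) is meant to exclude.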

To proceed further, we have to assume that the matrix function C in (4.5) is uniformly bounded on I. Furthermore, in order for the Lyapunov exponents to be stable, we will assume that the functions σ_i are integrally separated; i.e., there exist constants k_1 > 0 and k_2, 0 < k_2 ≤ 1, such that

(4.10) (σ_j(t)/σ_j(s)) · (σ_{j+1}(s)/σ_{j+1}(t)) ≥ k_2 e^{k_1(t−s)}, t ≥ s ≥ 0, j = 1, 2, ..., d − 1.

Condition (4.10) is equivalent to the integral separation of the diagonal of C.

The following results are then obtained as for ODEs in [17].

Proposition 4.3. Consider the differential equations (4.7) and (4.8) and suppose that the diagonal of C is integrally separated. Then the following statements hold.

(a) There exists t̄ ∈ I such that for all t ≥ t̄, we have

Proof. The proofs of (a), (b), and the convergence of V are given in [17, 20]. Further, one can show that the convergence rate of K is not worse than −k_1, where k_1 is the constant in (4.10); see [20, Lemma 7.3]. Then, invoking the argument of [15, Lemma 2.4], we obtain

||V(t) − V̄|| ≤ (e^{∫_t^∞ ||K(s)|| ds} − 1) ||V̄||,
