
MATHEMATICS RESEARCH DEVELOPMENTS

HOT TOPICS IN LINEAR ALGEBRA

No part of this digital document may be reproduced, stored in a retrieval system or transmitted in any form or by any means. The publisher has taken reasonable care in the preparation of this digital document, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained herein. This digital document is sold with the clear understanding that the publisher is not engaged in rendering legal, medical or any other professional services.


MATHEMATICS RESEARCH DEVELOPMENTS

Additional books and e-books in this series can be found on Nova's website under the Series tab.


HOT TOPICS IN LINEAR ALGEBRA


All rights reserved. No part of this book may be reproduced, stored in a retrieval system or transmitted in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical photocopying, recording or otherwise, without the written permission of the Publisher.

We have partnered with Copyright Clearance Center to make it easy for you to obtain permissions to reuse content from this publication. Simply navigate to this publication's page on Nova's website and locate the "Get Permission" button below the title description. This button is linked directly to the title's permission page on copyright.com. Alternatively, you can visit copyright.com and search by title, ISBN, or ISSN.

For further questions about using the service on copyright.com, please contact:

Copyright Clearance Center
Phone: +1-(978) 750-8400    Fax: +1-(978) 750-4470    E-mail: info@copyright.com

NOTICE TO THE READER

The Publisher has taken reasonable care in the preparation of this book, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained in this book. The Publisher shall not be liable for any special, consequential, or exemplary damages resulting, in whole or in part, from the readers' use of, or reliance upon, this material. Any parts of this book based on government reports are so indicated, and copyright is claimed for those parts to the extent applicable to compilations of such works.

Independent verification should be sought for any data, advice or recommendations contained in this book. In addition, no responsibility is assumed by the Publisher for any injury and/or damage to persons or property arising from any methods, products, instructions, ideas or otherwise contained in this publication.

This publication is designed to provide accurate and authoritative information with regard to the subject matter covered herein. It is sold with the clear understanding that the Publisher is not engaged in rendering legal or any other professional services. If legal or any other expert assistance is required, the services of a competent person should be sought. FROM A DECLARATION OF PARTICIPANTS JOINTLY ADOPTED BY A COMMITTEE OF THE AMERICAN BAR ASSOCIATION AND A COMMITTEE OF PUBLISHERS.

Additional color graphics may be available in the e-book version of this book.

Library of Congress Cataloging-in-Publication Data

Names: Kyrchei, Ivan I., editor.
Title: Hot topics in linear algebra / Ivan Kyrchei, editor.
Identifiers: LCCN 2020015306 (print) | LCCN 2020015307 (ebook) | ISBN 9781536177701 (hardcover) | ISBN 9781536177718 (adobe pdf)
Subjects: LCSH: Algebras, Linear.
Classification: LCC QA184.2 H68 2020 (print) | LCC QA184.2 (ebook) | DDC 512/.5 dc23
LC record available at https://lccn.loc.gov/2020015306
LC ebook record available at https://lccn.loc.gov/2020015307

Published by Nova Science Publishers, Inc. † New York


CONTENTS

Chapter 1. Computing Generalized Inverses Using Gradient-Based Dynamical Systems (Predrag S. Stanimirović and Yimin Wei)

Chapter 2. (Ivan I. Kyrchei)

Chapter 3. Bisymmetric Solutions of General Coupled Matrix Equations (Masoud Hajarian)

Chapter 4. Quaternion Matrix Equations (Abdur Rehman, Ivan I. Kyrchei, Muhammad Akram, Ilyas Ali and Abdul Shakoor)

Chapter 5. Applications (Taras Goy and Roman Zatorsky)

Chapter 6. (Volodymyr M. Prokip)

Chapter 7. Matrices in Chemical Problems Modeled Using Directed Graphs and Multigraphs (Marta G. Caligaris, Georgina B. Rodríguez and Lorena F. Laugero)


PREFACE

Linear algebra is the branch of mathematics concerning vector spaces and linear mappings between such spaces. Systems of linear equations with several unknowns are naturally represented using the formalism of matrices and vectors, and so we arrive at matrix algebra. Linear algebra is central to almost all areas of mathematics. Many ideas and methods of linear algebra have been generalized to abstract algebra, and functional analysis studies the infinite-dimensional version of the theory of vector spaces. Combined with calculus, linear algebra facilitates the solution of linear systems of differential equations. Linear algebra is also used in most sciences and engineering areas, because it allows modeling many natural phenomena and computing efficiently with such models.

"Hot Topics in Linear Algebra" presents original studies in some areas of the leading edge of linear algebra. Each article has been carefully selected in an attempt to present substantial research results across a broad spectrum. Topics discussed herein include recent advances in the analysis of various dynamical systems based on the Gradient Neural Network; Cramer's rules for quaternion generalized Sylvester-type matrix equations; matrix algorithms for finding the generalized bisymmetric solution pair of general coupled Sylvester-type matrix equations; explicit solution formulas for some systems of mixed generalized Sylvester-type quaternion matrix equations; new approaches to studying the properties of Hessenberg matrices by using triangular tables and their functions; research on polynomial matrices over a field with respect to semi-scalar equivalence; mathematical modeling problems in chemistry, namely mixing problems and their associated MP-matrices; and some visual apps, designed in Scilab, for the learning of different topics of linear algebra.

In Chapter 1, dynamical systems and recurrent neural networks are applied as a powerful tool for solving many kinds of matrix algebra problems. In particular, RNN models that are dedicated to finding zeros of equations or to minimizing nonlinear functions, and which represent optimization networks, are used for computing generalized inverse matrices. Convergence properties and exact solutions of the considered models are investigated in this chapter as well.

In the following three chapters, matrix equations, one of the most famous subjects of linear algebra, are studied.

The well-known Cramer's rule is an elegant formula for the solutions of a system of linear equations that has both theoretical and practical importance. It is a consequence of the unique determinantal representation of the inverse matrix by the adjoint matrix with the cofactors in the entries. Is it possible to solve by Cramer's rule the generalized Sylvester matrix equation (1), moreover when this equation has quaternionic coefficient matrices? Chapter 2 gives the answer to this question. In this chapter, Cramer's rules for Eq. (1) and for the quaternionic generalized Sylvester matrix equations with ∗- and η-Hermicities are derived within the framework of the theory of noncommutative column-row determinants previously introduced by the author. Algorithms for finding solutions are obtained in both cases, with complex and with quaternionic coefficient matrices.
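For reference, the classical rule that the chapter generalizes reads as follows (standard background, stated here for convenience): for a square system Ax = b with det A ≠ 0, the unique solution is

\[ x_i = \frac{\det A_i}{\det A}, \qquad i = 1, \ldots, n, \]

where A_i denotes the matrix obtained from A by replacing its i-th column with b.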

In Chapter 3, the Hestenes-Stiefel (HS) version of the biconjugate residual algorithm is extended to find the generalized bisymmetric solution pair of the general coupled matrix equations.


Necessary and sufficient conditions for the studied system to have a solution are derived in Chapter 4. The solution pair (X, Y) is expressed in terms of Moore-Penrose inverses, and its determinantal representations by noncommutative column-row determinants are used in an example.

In Chapter 5, new approaches to studying the properties of Hessenberg matrices and effective algorithms for calculating the determinants and permanents of such matrices are considered. The theory of new subjects of linear algebra, triangular tables and their functions, paradeterminants and parapermanents, which are analogs of the determinant and permanent, is used in this chapter.

Polynomial matrices over a field are studied in Chapter 6 with respect to semi-scalar equivalence. The necessary and sufficient conditions of semi-scalar equivalence of polynomial matrices over a field of characteristic zero are given in terms of solutions of a homogeneous system of linear equations, and canonical forms are obtained with respect to semi-scalar equivalence.

In Chapter 7, mathematical modeling problems in chemistry, namely mixing problems (MPs), are explored. These problems lead to systems of linear ordinary differential equations (ODEs), for which the associated matrices (so-called MP-matrices) have different structures depending on the internal geometry of the system. Useful tools to characterize the geometrical properties of MPs come from graph theory; in particular, directed graphs and multigraphs are widely utilized in this chapter for that purpose. The main objective of this chapter consists in analyzing MP-matrices, focusing on their algebraic properties, which involve eigenvectors, eigenvalues and their algebraic and geometric multiplicities.

The main objective of Chapter 8 is to present some visual apps, designed in Scilab, for the learning of different topics of linear algebra. These apps are far-reaching resources that give new didactical and pedagogical possibilities.


In: Hot Topics in Linear Algebra. Editor: Ivan I. Kyrchei. © Nova Science Publishers, Inc.

Chapter 1

COMPUTING GENERALIZED INVERSES USING GRADIENT-BASED DYNAMICAL SYSTEMS

Predrag S. Stanimirović¹,* and Yimin Wei²,†

¹ University of Niš, Faculty of Sciences and Mathematics, Niš, Serbia
² Fudan University, Shanghai, P. R. China

Abstract. The present chapter is a survey and further theoretical and computational analysis of various dynamical systems based on the Gradient Neural Network (GNN) evolution design for solving matrix equations and computing generalized inverses. For that purpose, different types of dynamic state equations corresponding to various outer and inner inverses are considered. In addition, some dynamical systems arising from GNN models have been proposed and used in computing generalized inverses. Convergence properties and exact solutions of the considered models are investigated. Simulation results are obtained using a Matlab Simulink implementation and Matlab programs. Also, an algorithm for generating the exact solution of some dynamic state equations is stated. Implementation of that algorithm in the Computer Algebra System (CAS) Mathematica gives efficient software for symbolic computation of outer inverses of matrices. The domain of the Mathematica program includes constant matrices whose entries are integers or rational numbers, as well as one-variable or multiple-variable rational or polynomial matrices. Illustrative examples are presented using a symbolic implementation in the package Mathematica.

Keywords: Gradient Neural Network (GNN), dynamical system, dynamic state equation, convergence, computer algebra

1. INTRODUCTION

Throughout the chapter, A^*, R(A), N(A), rank(A) and σ(A) stand for the conjugate transpose, the range, the null space, the rank and the spectrum of a given matrix A; for real matrices, A^T denotes the transpose. Similarly, R[X] (resp. R(X)) denotes the set of polynomials (resp. rational functions) with real coefficients, and R(X)^{m×n}_r denotes the set of m × n matrices of rank r over R(X).

The problem of generalized inverses computation leads to the so-called Penrose equations

(1) AXA = A,    (2) XAX = X,    (3) (AX)^T = AX,    (4) (XA)^T = XA.

The Moore-Penrose inverse A† is the unique matrix X which satisfies all four equations, while an outer inverse of A is a matrix X which fulfills the matrix equation (2), in conjunction with the prescribed range R(X) = T and null space N(X) = S in the case of the outer inverse A^(2)_{T,S}.
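As a quick numerical illustration (a minimal sketch; the test matrix below is arbitrary and not taken from the chapter), the built-in Mathematica function PseudoInverse can be used to check the four Penrose equations:

a = {{1., 2.}, {3., 4.}, {5., 6.}};  (* arbitrary 3 x 2 test matrix *)
x = PseudoInverse[a];
(* residual norms of the four Penrose equations; all should be ~ 0 *)
{Norm[a.x.a - a], Norm[x.a.x - x],
 Norm[Transpose[a.x] - a.x], Norm[Transpose[x.a] - x.a]} // Chop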


The right inverse of A ∈ R^{m×n}_m will be denoted by A_R^{-1} = A^T (A A^T)^{-1}; it satisfies A A_R^{-1} = I for an identity matrix of appropriate dimensions. For other important properties of generalized inverses see [1, 2, 3].

There are three general approaches to computing generalized inverses.

1. Classical numerical algorithms, defined as a complete set of procedures for finding an approximate solution of a problem, together with computable error estimates. Numerical algorithms can be divided into two categories: direct and iterative methods. The singular value decomposition (SVD) algorithm is the best known among the direct methods [2]. Other types of matrix factorizations have also been exploited in the computation of generalized inverses, such as the QR decomposition [4, 5] and the LU factorization [6]. Methods based on the application of the Gauss-Jordan elimination process to an appropriate augmented matrix were investigated in [7, 8]. Algorithms for computing the inverse of a constant matrix were presented in [9, 10]. A more general finite algorithm for computing the Moore-Penrose generalized inverse of a given rectangular or singular matrix, as well as the finite algorithm for computing the Drazin inverse, were introduced in the literature, with related results established in [15]. Greville's partitioning method, originated in [16], has been very popular in recent years.

Iterative methods, such as the orthogonal projection algorithms, the Newton iterative algorithm, and the higher-order convergent iterative methods, are more suitable for implementation. The Newton iterative algorithm has a fast convergence rate, but it requires an initial condition for its convergence. All iterative methods, in general, require initial conditions which are ultimate, rigorous and sometimes cannot be fulfilled easily. A number of iterative methods were proposed in [8, 17, 18, 19, 20, 21, 22, 23, 24] and many other references.

2. Computer algebra, also called symbolic computation or algebraic computation, is a part of computational mathematics that refers to algorithms and software for manipulating mathematical expressions and other mathematical objects; an application of such algorithms to generalized inverses was given in [25]. More details can be found in the references [26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44].

3. Continuous-time recurrent neural network (RNN) algorithms, based on dynamical systems.

A dynamical system is a system in which the movement of some points in a geometrical space is defined by a time-dependent function. Dynamical systems and recurrent neural networks are a powerful tool for solving many kinds of matrix algebra problems because of:

(a) their parallel distributed nature;

(b) the possibility to ensure a response within a predefined time-frame in real-time applications;

(c) the convenience of hardware implementation.

We consider RNN models dedicated to finding zeros of equations or to minimizing nonlinear functions. These models represent optimization networks. Optimization RNN models can be divided into two classes: Gradient Neural Networks (GNN) and Zhang Neural Networks (ZNN). GNN models are explicit and aimed at solving time-invariant problems. On the other hand, ZNN models can be implicit and are able to solve time-varying problems.

Recently, a number of nonlinear and linear continuous-time dynamical systems, initiated as recurrent neural network models, have been developed for the purpose of numerical evaluation of the matrix inverse and the pseudoinverse of full-row or full-column rank rectangular matrices (for more details, see [45, 46, 47]). Also, various recurrent neural networks for computing generalized inverses of rank-deficient matrices were designed in [48, 49]. The most general case was investigated in [50]. In [51] the authors proposed conditions for the existence of outer inverses with prescribed range and null space, and a framework for computing these generalized inverses was proposed; the computational algorithms of [51] are defined using GNN models for solving matrix equations. It is known that the multiplication of the right-hand side of the classical ZNN design by an appropriate positive definite matrix generates a new neural design with an improved convergence rate. The goal in [52] is to apply similar principles to the GNN design. Appropriate combinations of GNN and ZNN models were developed in [53]. Two gradient-based recurrent neural networks for computing the W-weighted Drazin inverse of a real constant matrix were presented in [54].

The global organization of the sections is as follows. Various dynamical systems for solving matrix equations are investigated in Section 2. Section 3 is devoted to GNN models for computing generalized inverses. RNN models arising from GNN models are considered in Section 4. Convergence properties and exact solutions of the considered models are investigated in Section 5. Section 6 investigates symbolic computation of outer inverses based on finding exact solutions of the underlying dynamic state equations. Illustrative simulation examples are presented in Section 7.

2. GNN DYNAMICS FOR SOLVING MATRIX EQUATIONS

The dynamics of the GNN models for solving a matrix equation is defined on the basis of the error matrix E(t), obtained by replacing the unknown matrix in the considered matrix equation by the time-varying activation state function V(t). The scalar-valued goal function is

ε(t) = ‖E(t)‖_F² / 2,

where ‖A‖_F denotes the Frobenius norm of a matrix A and Tr(·) denotes the trace of a matrix. The general design formula is defined as the dynamical system with the evolution along the negative gradient of ε(t):

dV(t)/dt = −γ F( ∂ε(V(t)) / ∂V(t) ),    (2.4)

where the gain γ > 0 is the inverse of a capacitance parameter, and could be chosen as large as possible in order to accelerate the convergence, while F(·) denotes an odd, monotonically increasing function array, element-wise applicable to elements of a real matrix.

Remark 2.1. Several real-valued monotonically increasing odd functions are widely used as activations F(·); among them is the smooth power-sigmoid function.
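As an illustration (a sketch only, assuming the power-sigmoid variant common in the GNN/ZNN literature; the parameters p and ξ are illustrative choices, not values fixed by the chapter):

(* power-sigmoid activation: odd and monotonically increasing for odd p >= 3 *)
powerSigmoid[x_, p_: 3, xi_: 4] :=
  If[Abs[x] >= 1, x^p,
    ((1 + Exp[-xi])/(1 - Exp[-xi]))*((1 - Exp[-xi*x])/(1 + Exp[-xi*x]))]
(* element-wise application to a real matrix, as required by the design (2.4) *)
applyF[m_?MatrixQ] := Map[powerSigmoid, m, {2}]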

Before developing particular models, it is useful to consider a solution to this matrix equation and its particular appearances. Clearly, the GNN models considered so far can be defined upon one of these appearances.

2.1. GNN for Solving the Matrix Equation AXB = D

The most general gradient-based neural dynamics is aimed at solving the general matrix equation AXB = D, investigated in [50]. The model is based on the matrix-valued error function E(t) = D − AV(t)B. Consequently, the scalar-valued goal function is

ε(t) = ‖D − AV(t)B‖_F² / 2,

with the gradient

∂ε(V(t)) / ∂V = −A^T (D − AV(t)B) B^T.

Using the general evolution design (2.4), the nonlinear GNN design for solving AXB = D, shortly GNN(A, B, D), is given by

dV(t)/dt = γ A^T F( D − AV(t)B ) B^T.    (2.9)
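A minimal numerical sketch of this dynamics in Mathematica, assuming the linear activation F(x) = x and arbitrary illustrative matrices (none of the data below comes from the chapter):

(* GNN (2.9) for AXB = D with F = identity, integrated by NDSolve *)
a = {{1., 2.}, {2., 1.}};  b = {{1., 0.}, {1., 1.}};
xTrue = {{1., 1.}, {0., 2.}};  d = a.xTrue.b;   (* consistent right-hand side *)
gamma = 50.;
vt = Array[v[#1, #2][t] &, {2, 2}];             (* state matrix V(t) *)
rhs = gamma*Transpose[a].(d - a.vt.b).Transpose[b];
eqns = Join[Thread[Flatten[D[vt, t]] == Flatten[rhs]],
            Thread[Flatten[vt /. t -> 0] == 0]];
sol = NDSolve[eqns, Flatten[Array[v, {2, 2}]], {t, 0, 1}];
vt /. t -> 1. /. First[sol]   (* close to xTrue, since gamma*t is large *)

Since the test matrices A and B are nonsingular, the unique solution is X = A^{-1} D B^{-1} = xTrue, and the state matrix converges to it exponentially.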

Remark 2.2. It is important to emphasize the difference between the GNN design (2.9) and its generalization, obtained by allowing time-varying coefficient matrices in the error function.

The generalized GNN model (GGNN model) is applicable in both the time-varying and the time-invariant case, and it can operate with time-varying coefficient matrices; its state matrix V(t) approaches a solution of the underlying matrix equation as t → +∞, for an arbitrary initial state matrix V(0). An appropriate substitution transforms the dynamics (2.9) into an equivalent form convenient for the subsequent convergence analysis.


Evidently, the inequality L(V(t), t) ≥ 0 holds for V(t) ≠ 0. According to the definition of the Lyapunov function candidate and basic properties of the matrix trace function, one can express the time derivative of L(V(t), t). Denoting the residual by W(t) = AV(t)B − D and generalizing the strategy from [56], one can verify the following:

dL(V(t), t)/dt < 0 if W(t) ≠ 0,    dL(V(t), t)/dt = 0 if W(t) = 0.

This further implies:

- dL(V(t), t)/dt < 0 at any non-equilibrium state V(t) satisfying W(t) = AV(t)B − D ≠ 0;

- dL(V(t), t)/dt = 0 at the equilibrium state V(t) satisfying W(t) = AV(t)B − D = 0.

It is worth mentioning that the constraints (2.17) are not caused by the GNN evolution design; this condition is just the general condition for solvability of the matrix equation AXB = D.


Theorem 2.2. [50] Assume that the real matrices A ∈ R^{m×n}, B ∈ R^{p×q} and D ∈ R^{m×q} satisfy the solvability condition (2.17). Then the state matrix V(t) of the GNN model (2.9) is convergent as t → +∞, with a limit of the form (2.18). According to the basic properties of the Moore-Penrose inverse, it follows that this limit solves AXB = D.


According to Theorem 2.2, the limiting value Ṽ of V(t) is determined by the expression (2.18), which depends on the initial state V(0).

Corollary 2.1 follows immediately from (2.18), taking into account (2.17); it is also directly implied by Theorem 2.1.

(b) Solution (2.18) to the GNN model (2.9) coincides with the general solution of the matrix equation AXB = D.

(c) Also, it is important to mention two further details, discussed in the sequel.


3. GNN MODELS FOR COMPUTING GENERALIZED INVERSES

It is known that representations of generalized inverses are closely related to solving appropriate matrix equations. In this section, we exploit this possibility and derive dynamical systems for computing the main generalized inverses.

3.1. GNN for Regular Inverse

Wang in [47] proposed the dynamic equation of the linear recurrent neural network for computing the inverse of a nonsingular matrix A, based on the error function E(t) = AV(t) − I. In this case, the objective scalar-valued function is defined as ε(t) = ‖AV(t) − I‖_F² / 2, which leads to the linear dynamics

dV(t)/dt = −γ A^T ( AV(t) − I ).    (3.1)

It is proven in [47] that the GNN model (3.1) is asymptotically stable in the large, and that the steady-state matrix of the recurrent neural network is equal to the inverse A^{-1}.

3.2. GNN for Computing the Moore-Penrose Inverse

The recurrent neural network defined in (3.1) can be used for computing the right inverse, while its dual model

dV(t)/dt = −γ ( V(t)A − I ) A^T    (3.2)

can be used in approximating the left inverse A† = A_L^{-1} = (A^T A)^{-1} A^T. The closed-form solutions of the state matrices from (3.1) and (3.2) can be described by matrix exponentials and integral terms with lower limit 0, analogous to (5.13) below. The global exponential convergence of the gradient neural network (3.2) under the corresponding rank assumption was verified in [60].
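For instance, the closed-form state of (3.1) with the zero initial state is V(t) = (A^T A)^{-1} (I − e^{−γ A^T A t}) A^T, which tends to A^{-1} for a nonsingular A. A small Mathematica check of this formula (the test data is illustrative, not from the chapter):

(* closed form of the linear GNN (3.1) with V(0) = O, via the matrix exponential *)
a = {{2., 1.}, {1., 3.}};  gamma = 10.;  t1 = 2.;
vT = Inverse[Transpose[a].a].(IdentityMatrix[2] -
       MatrixExp[-gamma*t1*Transpose[a].a]).Transpose[a];
Norm[vT - Inverse[a]]   (* ~ 0 *)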

3.3. GNN for Computing the Weighted Moore-Penrose Inverse

Wei in [49] introduced a first-order dynamic state equation, analogous to (3.1), for computing the weighted Moore-Penrose inverse of a rank-deficient matrix; the gain γ is taken as a real large scalar value [49]. The corresponding integral representation of the weighted Moore-Penrose inverse of a linear operator between Hilbert spaces, with the integral taken over [0, ∞), was introduced in [61].

3.4. GNN for Computing Outer Inverses

The starting point, used first for the definition of the dynamic state equation and later for the induced gradient-based recurrent neural network (GNN), is given in Lemma 3.1, which characterizes the outer inverse A^(2)_{R(G),N(G)} as the solution of appropriate matrix equations, provided that the corresponding rank conditions are satisfied.

The general model is based on the error function E(t) = AV(t)B − D. On the other hand, the GNN(GA, I, G) model is its particular case corresponding to A := GA, B := I, D := G, that is, to the error function E(t) = GAV(t) − G, where V(t) ∈ R^{n×m} denotes the unknown time-varying matrix to be solved. Our intention is to exploit the dynamic-system approach in conjunction with symbolic data processing. For this purpose, the generally adopted rule is to use one of the following two dual scalar-valued error functions, defined as residual Frobenius norms:

ε_R(t) = ‖GAV(t) − G‖_F² / 2,    ε_L(t) = ‖V(t)AG − G‖_F² / 2.

Corollary 3.1 arises from Theorem 2.2; it determines the limiting value Ṽ of the corresponding state matrix.

[Figure: Simulink implementation of the GNN(GA, I, G) dynamics. Matrix Multiply blocks form the product GAV(t) and the residual GAV(t) − G, a gain block γ and an integrator 1/s produce the state V(t), and an Interpreted MATLAB Fcn block computes the Frobenius norm of the residual, shown on a Time Scope and a Display.]

The exact solution is given by (3.14). The case of time-varying coefficient matrices was investigated in [50]. The dual model GNN(I, AG, G), defined by (3.13), is convergent as t → +∞ and has the limit value Ṽ_AG given below.


(ii) In particular, for V_AG(0) = O, it follows that the limit value Ṽ_AG equals the corresponding outer inverse.

4. RNN MODELS ARISING FROM GNN MODELS

Linear RNN models arise by removing the nonlinear activation F(·) from (3.12) (resp. (3.13)); this leads to two dual linear RNN models, defined as follows:

dV(t)/dt = −γ ( GA V(t) − G ) for m ≥ n,    dV(t)/dt = −γ ( V(t) AG − G ) for m < n.    (4.17)

The dynamical evolution (4.17) will be termed GNNATS2. Practically, it is possible to consider two RNN models for computing outer inverses: the first one is GNNATS2R ≡ RNN(GA, I, G), defined by

dV_R(t)/dt = −γ ( GA V_R(t) − G ),    (4.18)

and the second one is GNNATS2L ≡ RNN(I, AG, G), defined by

dV_L(t)/dt = −γ ( V_L(t) AG − G ).    (4.19)

The application of the dynamic evolution design (4.17) assumes that the real parts of the involved spectra are nonnegative:

σ(GA) ⊂ {z : Re(z) ≥ 0} for m ≥ n,    σ(AG) ⊂ {z : Re(z) ≥ 0} for m < n.    (4.20)

The dynamics can fail when the corresponding spectrum contains negative values. Clearly, the model (4.17) is simpler than the models (3.12) and (3.13), but it loses global stability. Two approaches can be used to generate the solution in the case when (4.20) is not satisfied; one of them modifies the RNN(GA, I, G) dynamics.

The recurrent neural network defined above is a linear dynamic system in matrix form. According to linear systems theory [63], the closed-form solution of the state matrix can be described as follows:

V(t) = e^{−γ GA t} V(0) + γ ∫_0^t e^{−γ GA (t−τ)} G dτ.    (4.21)

To analyze the convergence and stability of the neural network, it is of major importance to know when

lim_{t→∞} e^{−γ GA t} = O.    (4.22)

Now, (4.22) in conjunction with (4.21) implies the following result.
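A small sanity check of this closed form (a sketch with illustrative data; the test product GA below is nonsingular with positive spectrum, so (4.20) holds):

(* closed form (4.21) with V(0) = O; for invertible GA it equals Inverse[GA].(I - e^(-gamma t GA)).G *)
a = {{2., 0.}, {0., 1.}, {1., 1.}};  g = {{1., 0., 0.}, {0., 1., 0.}};
ga = g.a;  gamma = 5.;  t1 = 3.;
vmat = Inverse[ga].(IdentityMatrix[2] - MatrixExp[-gamma*t1*ga]).g;
Norm[ga.vmat - g]   (* ~ 0: the steady state solves GA.V = G *)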


Theorem 4.1. [62] Let A ∈ R^{m×n}_r be a given matrix and let G ∈ R^{n×m}_s be arbitrary with 0 < s ≤ r. If the spectral condition (4.20) is satisfied, then the state matrix of the dynamics (4.18) is convergent, and its limit is the outer inverse A^(2)_{R(G),N(G)}, i.e.,

Ṽ_GA = A^(2)_{R(G),N(G)}.    (4.25)

Remark 4.1. An analogous statement can be verified when the outer inverse is generated by the dual dynamics (4.19), with GA replaced by AG.

According to Theorem 4.1, the application of the dynamic equation (4.18) is limited by the spectrum of GA, and of the dual equation (4.19) by the spectrum of AG. More precisely, the first RNN approach used in [62] fails in the case when Re(σ(GA)) contains negative values.

The neural network used in our implementation is composed of a number of independent subnetworks, in a similar way as has already been discussed above. The connection weight matrix is common to each subnetwork and is defined as −γ GA in the case m ≥ n, and as −γ AG in the case m < n.

Particularly, a simplification of the GNN model for computing the Drazin inverse can be defined along the same lines. A method to resolve the limitation (4.27) was proposed in [64], and it is based on replacing A by an appropriate power of A, selected according to the spectrum of A.


There are several cases, depending on the eigenvalues of A, for selecting the exponent k so that the relevant spectrum consists of numbers with nonnegative real parts. These cases are discussed in Theorem 4.2. Before the main results, we present several supporting facts in Lemma 4.1 and some notations; the admissible values of k are described by inequalities involving the arguments ϕ_j of the eigenvalues of A and odd multiples of π/2 of the form (4s + 1)π/2.

Theorem 4.2 asserts that the resulting recurrent neural network for computing the Drazin inverse of A, with k ≥ ind(A), is globally stable, and that its steady-state matrix equals the required limit, in several cases: for instance, when k + 1 is even, and, in Case 3, when the spectrum of A satisfies additional conditions.

5. FURTHER RESULTS ON CONVERGENCE PROPERTIES

Before a fixed (or equilibrium) point can be analysed, it is desirable to determine it. This initiates the importance of some classical computer algebra problems (such as finding an exact solution to differential or algebraic equations) in the study of dynamical systems. Exact solutions of some dynamical systems are investigated in [65, 66].

The following result from [65, Appendix B.2] will be useful.

Proposition 5.1. The matrix differential equation V′(t) = M V(t) + N, with constant coefficient matrices M, N and initial state V(0), has the closed-form solution V(t) = e^{Mt} V(0) + ∫_0^t e^{M(t−τ)} N dτ.

In the sequel, let A ∈ R^{m×n}_r and G ∈ R^{n×m}_s satisfy 0 < s ≤ r and ind(GA) = ind(AG) = 1, and assume that the spectrum σ(GA) fulfills appropriate conditions.


(b) If the initial approximation is the zero matrix, V(0) = O, then the exact solution of the GNNATS2R evolution (4.18) is equal to the particular solution appearing in (5.2).

Proof. (b) The zero initial state V(0) = O vanishes the term e^{−γt GA} V_R(0), and the proof is implied by part (a).

(c) It suffices to use the known fact from [62], where the authors showed that lim_{t→∞} e^{−γt GA} = O if the matrix GA has eigenvalues with nonnegative real parts, in conjunction with equation (5.2).

For the GNN(GA, I, G) dynamics (3.12), the following statements are valid:

(a) The exact solution with linear activation is

V_GA(t) = e^{−γt (GA)^T GA} V_GA(0) + ( I − e^{−γt (GA)^T GA} ) A^(2,4)_{R((GA)^T), N(G)}.    (5.10)


(b) The exact solution to the GNN(GA, I, G) dynamics (3.12) with the zero initial state V_GA(0) = O is obtained from (5.10) by omitting the first term.

Proof. (a) Using known results from [63], the closed-form solution of the state matrix is

V_GA(t) = e^{−γt (GA)^T GA} V_GA(0) + γ e^{−γt (GA)^T GA} ∫_0^t e^{γτ (GA)^T GA} (GA)^T G dτ    (5.13)
        = e^{−γt (GA)^T GA} V_GA(0) + [ e^{−γt (GA)^T GA} ( e^{γt (GA)^T GA} − I ) ] (GA)† G
        = e^{−γt (GA)^T GA} V_GA(0) + ( I − e^{−γt (GA)^T GA} ) (GA)† G.

Now, the proof of this part can be easily completed using A^(2,4)_{R((GA)^T), N(G)} = (GA)† G.

(b) For V_GA(0) = O the first term vanishes, so the proof follows from part (a).

(c) Since lim_{t→∞} e^{−γt (GA)^T GA} = O, the proof follows from (5.11).

For the dual GNN(I, AG, G) dynamics (3.13), the following statements hold:

V_AG(t) = V_AG(0) e^{−γt AG (AG)^T} + A^(2,3)_{R(G), N((AG)^T)} ( I − e^{−γt AG (AG)^T} ).    (5.15)
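The algebra behind (5.13) can be verified numerically for arbitrary data, since the identity (GA)^T GA (GA)† = (GA)^T holds for every matrix GA. A minimal sketch (the test matrices are illustrative, not from the chapter):

(* check that V(t) = (I - e^(-gamma t M)).PseudoInverse[GA].G, with M = (GA)^T.GA,
   satisfies the linear GNN(GA, I, G) dynamics V'(t) = -gamma (GA)^T.(GA.V(t) - G) *)
a = {{1., 0.}, {2., 1.}, {0., 1.}};  g = {{1., 0., 1.}, {0., 1., 1.}};
gamma = 2.;  ga = g.a;  m = Transpose[ga].ga;
vFun[t_] := (IdentityMatrix[2] - MatrixExp[-gamma*t*m]).PseudoInverse[ga].g;
rhs[t_] := -gamma*Transpose[ga].(ga.vFun[t] - g);
h = 10.^-6;
Norm[(vFun[1. + h] - vFun[1. - h])/(2 h) - rhs[1.]]   (* ~ 0 *)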

A classical numerical approach is based on the vectorization of the system of matrix differential equations into the vector form (with a mass matrix) and then solving the vector of differential equations by means of one of the standard ODE solvers. This approach is computationally demanding, since it requires the application of one of these solvers at unpredictable time instants inside the predefined time interval.

Now, we can define exactly our "symbolic solver" or "symbolic Simulink". Namely, they are defined as one of the symbolic matrices given in (5.2), or in (5.11) or (5.15). The exact outer inverses can be generated as the limiting values of these symbolic matrices. The main idea is to solve the matrix differential equations in symbolic form. The solution given in symbolic form should be generated only once. After that, the symbolic expression derived as the symbolic solution to the GNNATS2 dynamics can be used as a "symbolic solver" which is able to define the state matrix at an arbitrary time instant; in addition, it gives the possibility to investigate some limiting properties of the symbolic solver by tools of computer algebra.

According to the previous discussion, we present the corresponding algorithm for finding exact solutions of the dynamic state equation (4.17) or (3.12). The entries of the input matrices can be integers or rational numbers, as well as one-variable or multiple-variable rational or polynomial expressions.

Algorithm 6.1. Computing an outer inverse of a given matrix A ∈ R(X)^{m×n}_r.

Require: Matrices A ∈ R(X)^{m×n}_r and G ∈ R(X)^{n×m}_s, 0 < s ≤ r, given in symbolic form.
Step 1: Compute the matrix product GA appearing in the GNN(GA, I, G) dynamics.
Step 2: Define the unknown state matrix V(t) in symbolic form.
Step 3: Define the symbolic matrix equation eqnState, V′(t) = −γ (GA)^T ( GA V(t) − G ), and the initial condition eqnInit, V(0) = O.
Step 4: Join eqnState and eqnInit into the system eqns and collect the unknowns vars.
Step 5: Solve the system of differential equations eqns with respect to vars and return the exact solution V(t).

The implementation of Algorithm 6.1 is performed by the following code in the algebraic programming language Mathematica [68]. Below we give the code which solves the problem (3.12). Let us mention that the Mathematica function DSolve generates the exact solution in Step 5 of Algorithm 6.1.


SymNNInv[A_, G_] := Module[
  {dimsA, V, derV, eqnState, eqnInit, eqns, vars, ret, prodGA, eqnEvol},
  dimsA = Dimensions[A];                       (* A is m x n *)
  (* Unknown state matrix V(t) of size n x m, with entries v[i,j][t] *)
  V = Table[v[i, j][t], {i, dimsA[[2]]}, {j, dimsA[[1]]}];
  derV = D[V, t];
  (* Compute the matrix product GA *)
  prodGA = G.A;
  (* Evolution of the GNN(GA, I, G) dynamics (3.12) with linear activation *)
  eqnEvol = -\[Gamma]*Transpose[prodGA].(prodGA.V - G);
  eqnState = Thread[Flatten[derV] == Flatten[eqnEvol]];
  (* Zero initial state V(0) = O *)
  eqnInit = Thread[Flatten[V /. t -> 0] == 0];
  eqns = Join[eqnState, eqnInit];
  vars = Flatten[V];
  ret = DSolve[eqns, vars, t] // Simplify;
  ret = vars /. Sort[Flatten[ret]];
  (* Return the state matrix, reshaped to n x m *)
  Return[Table[ret[[(i - 1)*dimsA[[1]] + j]],
    {i, dimsA[[2]]}, {j, dimsA[[1]]}]];
];

(* For the dual GNN(I, AG, G) dynamics (3.13), replace the two lines defining
   prodGA and eqnEvol above by:
   prodAG = A.G;
   eqnEvol = -\[Gamma]*(V.prodAG - G).Transpose[prodAG];  *)
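A hypothetical usage of the module above (the input matrices A1 and G1 are arbitrary small symbolic matrices introduced here for illustration; the returned entries are symbolic functions of t and γ, whose limit as t → ∞, under the assumption γ > 0, yields the outer inverse):

A1 = {{1, 1}, {0, 1}, {1, 0}};  G1 = {{1, 0, 0}, {0, 1, 0}};
vSym = SymNNInv[A1, G1];
Limit[vSym, t -> Infinity, Assumptions -> \[Gamma] > 0] // MatrixForm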
