
Convergence Rates for the Tikhonov Regularization of Coefficient Identification Problems in Elliptic Equations


DOCUMENT INFORMATION

Basic information

Title: Convergence Rates for the Tikhonov Regularization of Coefficient Identification Problems in Elliptic Equations
Author: Trần Nhân Tâm Quyền
Supervisor: Prof. Dr. habil. Đinh Nho Hào
Institution: Vietnam Academy of Science and Technology
Specialization: Mathematics
Document type: Dissertation
Publication year: 2012
City: Hanoi
Number of pages: 98
File size: 4.91 MB




VIETNAM ACADEMY OF SCIENCE AND TECHNOLOGY

INSTITUTE OF MATHEMATICS

TRẦN NHÂN TÂM QUYỀN

Convergence Rates for the Tikhonov Regularization of Coefficient Identification Problems in Elliptic Equations

DISSERTATION SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY IN MATHEMATICS

HANOI–2012


VIETNAM ACADEMY OF SCIENCE AND TECHNOLOGY

INSTITUTE OF MATHEMATICS

TRẦN NHÂN TÂM QUYỀN

Convergence Rates for the Tikhonov Regularization of Coefficient Identification Problems in Elliptic Equations

Speciality: Differential and Integral Equations

Speciality Code: 62 46 01 05

DOCTOR OF PHILOSOPHY IN MATHEMATICS



I cannot find words sufficient to express my gratitude to my advisor, Professor Đinh Nho Hào, who gave me the opportunity to work in the field of inverse and ill-posed problems. Furthermore, throughout the years that I have studied at the Institute of Mathematics, Vietnam Academy of Science and Technology, he has introduced me to exciting mathematical problems and stimulating topics within mathematics. This dissertation would never have been completed without his guidance and endless support.

I would like to thank Professors Hà Tiến Ngoạn, Nguyễn Minh Trí and Nguyễn Đông Yên for their careful reading of the manuscript of my dissertation and for their constructive comments and valuable suggestions.

I would like to thank the Institute of Mathematics for providing me with such excellent working conditions for my research.

I am deeply indebted to the leaders of The University of Danang, Danang University of Education and the Department of Mathematics, as well as to my colleagues, who have provided encouragement and financial support throughout my PhD studies.

Last but not least, I wish to express my endless gratitude to my parents and also to my brothers and sisters for their unconditional and unlimited love and support since I was born. My special gratitude goes to my wife for her love and encouragement. I dedicate this work as a spiritual gift to my children.

Hà Nội, July 25, 2012

Trần Nhân Tâm Quyền


This work has been completed at the Institute of Mathematics, Vietnam Academy of Science and Technology, under the supervision of Prof. Dr. habil. Đinh Nho Hào. I hereby declare that the results presented in it are new and have never been published elsewhere.

Author: Trần Nhân Tâm Quyền


Convergence Rates for the Tikhonov Regularization of Coefficient Identification Problems in Elliptic Equations

By

TRẦN NHÂN TÂM QUYỀN

ABSTRACT

Let Ω be an open bounded connected domain in R^d, d ≥ 1, with Lipschitz boundary ∂Ω, and let f ∈ L^2(Ω) and g ∈ L^2(∂Ω) be given. In this work we investigate convergence rates for the Tikhonov regularization of the ill-posed nonlinear inverse problems of identifying the diffusion coefficient q in the Neumann problem for the elliptic equation

−div(q∇u) = f in Ω,  q ∂u/∂n = g on ∂Ω,

and the reaction coefficient a in the Neumann problem for the elliptic equation

−Δu + au = f in Ω,  ∂u/∂n = g on ∂Ω,

from imprecise data z^δ ∈ H^1(Ω) of the exact solution u with ‖u − z^δ‖_{H^1(Ω)} ≤ δ. The Tikhonov regularization for these two problems is applied to convex energy functionals. Under weak source conditions, which do not require smallness of the source functions, we obtain convergence-rate estimates for the regularization methods.


Contents

0.1 Modelling 5
0.2 Inverse problems and ill-posedness 6
0.2.1 Inverse problems 6
0.2.2 Ill-posedness 8
0.3 Review of Methods 10
0.3.1 Integrating along characteristics 11
0.3.2 Finite difference scheme 12
0.3.3 Output least-squares minimization 13
0.3.4 Equation error method 14
0.3.5 Modified equation error and least-squares method 15
0.3.6 Variational approach 16
0.3.7 Singular perturbation 18
0.3.8 Long-time behavior of an associated dynamical system 19
0.3.9 Regularization 20
0.4 Summary of the Dissertation 23
1 Problem setting and auxiliary results 28
1.1 Diffusion coefficient identification problem 28
1.1.1 Problem setting 28
1.1.2 Differentiability of the coefficient-to-solution operator 29
1.1.3 Some preliminary results 31
1.2 Reaction coefficient identification problem 35
1.2.1 Problem setting 35
1.2.2 Differentiability of the coefficient-to-solution operator 36
1.2.3 Some preliminary results 37
2.1 Convergence rates for L^2-regularization of the diffusion coefficient identification problem 42
2.1.1 L^2-regularization 42
2.1.2 Convergence rates 46
2.1.3 Discussion of the source condition 51
2.2 Convergence rates for L^2-regularization of the reaction coefficient identification problem 55
2.2.1 L^2-regularization 55
2.2.2 Convergence rates 59
2.2.3 Discussion of the source condition 62
Conclusions 63
3 Total variation regularization 64
3.1 Convergence rates for total variation regularization of the diffusion coefficient identification problem 64
3.1.1 Regularization by the total variation 64
3.1.2 Convergence rates 71
3.1.3 Discussion of the source condition 75
3.2 Convergence rates for total variation regularization of the reaction coefficient identification problem 78
3.2.1 Regularization by the total variation 78
3.2.2 Convergence rates 84
3.2.3 Discussion of the source condition 87
Conclusions 89
4 Regularization of total variation combining with L^2-stabilization 90
4.1 Convergence rates for total variation regularization combining with L^2-stabilization of the diffusion coefficient identification problem 90
4.1.1 Regularization by total variation combining with L^2-stabilization 90
4.1.2 Convergence rates 97
4.1.3 Discussion of the source condition 99
4.2 Convergence rates for total variation regularization combining with L^2-stabilization of the reaction coefficient identification problem 101
4.2.1 Regularization by the total variation combining with L^2-stabilization 101
4.2.2 Convergence rates 105
4.2.3 Discussion of the source condition 108
Conclusions 110
General Conclusions 111
List of the author's publications related to the dissertation 113

Function Spaces

Ω Open, bounded set with Lipschitz boundary in R^d
C^k(Ω) The set of k times continuously differentiable functions on Ω, 1 ≤ k ≤ ∞
C^∞(Ω) The set of infinitely differentiable functions on Ω
C_0^k(Ω) The set of functions in C^k(Ω) with compact support in Ω, 1 ≤ k ≤ ∞
L^p(Ω) The Lebesgue space on Ω, 1 ≤ p ≤ ∞
W^{k,p}(Ω) The Sobolev space of functions with k-th order weak derivatives in L^p(Ω)
‖u‖_X Norm of u in the normed space X
X* Dual space of the normed space X
⟨u*, u⟩_{(X*, X)} Duality product u*(u) of u* ∈ X* and u ∈ X
⟨u, v⟩_H Inner product of u, v in the Hilbert space H
L(X, Y) Space of bounded linear operators between the normed spaces X and Y
T* Adjoint in L(Y*, X*) of T ∈ L(X, Y)
∇v Gradient of the scalar function v
Δv Laplacian of the scalar function v
div Υ Divergence of the vector-valued function Υ
∂R(q) Subdifferential of the proper convex functional R at q ∈ Dom R, p. 21
D_ξ(p, q) Bregman distance with respect to R and ξ of two elements p, q, p. 21
q*, a* A-priori estimates of the true coefficients, pp. 42, 55
q†, a† q*-, a*-solutions of the inverse problems, pp. 42, 56, 66, 80, 91, 101
q_ρ^δ, a_ρ^δ Regularized solutions, pp. 43, 56, 66, 79, 91, 101
X The space L^∞(Ω) ∩ BV(Ω) with the norm ‖q‖_{L^∞(Ω)} + ‖q‖_{BV(Ω)}, p. 66
X_{BV(Ω)} The space X with respect to the BV(Ω)-norm, p. 67
X_{L^∞(Ω)} The space X with respect to the L^∞(Ω)-norm, p. 67
H^1_⋄(Ω) Space of functions in H^1(Ω) with mean zero, p. 28
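The Bregman distance listed above is the yardstick in which the convergence rates of the later chapters are measured. A minimal numerical illustration (ours, not from the dissertation): for the smooth functional R(q) = ½‖q‖^2 the subgradient is ξ = q, and the Bregman distance reduces to ½‖p − q‖^2.

```python
import numpy as np

# Bregman distance D_xi(p, q) = R(p) - R(q) - <xi, p - q>,
# where xi is a subgradient of R at q (for smooth R, xi = R'(q)).
def bregman(R, grad_R, p, q):
    xi = grad_R(q)
    return R(p) - R(q) - np.dot(xi, p - q)

# R(q) = 0.5 * ||q||^2 (discrete L2 norm); then D_xi(p, q) = 0.5 * ||p - q||^2.
R = lambda v: 0.5 * np.dot(v, v)
grad_R = lambda v: v

p = np.array([1.0, 2.0, 3.0])
q = np.array([0.0, 1.0, 1.0])
d = bregman(R, grad_R, p, q)
print(d, 0.5 * np.dot(p - q, p - q))  # the two values coincide
```

For nonsmooth R, such as the total variation used in Chapters 3 and 4, the subgradient is no longer unique and the distance is genuinely asymmetric in p and q.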


The problem of identifying parameters in distributed parameter systems arising in groundwater hydrology, heat conduction, population models, seismic exploration and reservoir simulation has attracted great attention from many scientists in the last 50 years or so. For surveys on the subject, we refer the reader to [4, 14, 19, 33, 34, 35, 37, 38, 40, 45, 46, 47, 49, 55, 58, 59, 60, 61, 62, 64, 68, 72, 76, 77, 89, 90, 91, 92, 95, 96, 97, 98, 105, 106, 110, 112, 113, 118, 119, 123, 126, 127, 128, 129, 130] and the references therein. The term "distributed parameter systems" means that the mathematical models in these situations are governed by partial differential equations. In this thesis we are interested in problems of identifying coefficients in groundwater hydrology, whose mathematical models contain function coefficients describing physical properties of the fluid flows or of the porous media. The coefficients appearing in the governing equations are not directly measurable from the physical point of view and have to be determined from historical observations. Such problems are called inverse problems; they are in general very difficult to solve because of the non-uniqueness and instability (the ill-posedness) of the identified coefficients. The aim of this thesis is to study convergence rates for the Tikhonov regularization of these ill-posed nonlinear problems. Before presenting our results, for ease of reading we briefly describe the mathematical models of fluid flows in porous media.

q(x) ∂u(x, t)/∂n = g_2(x, t),  x ∈ Γ_2, t > 0,

where q(x) is the diffusion coefficient.

The steady case of (0.1)–(0.2) is

−div(q(x)∇u) + a(x)u = f(x),  x ∈ Ω ⊂ R^d, d ≥ 1,  (0.3)

accompanied by the boundary conditions

u(x) = g_1(x), x ∈ Γ_1,  q(x) ∂u(x)/∂n = g_2(x), x ∈ Γ_2.  (0.4)

Physically, u can be interpreted as the piezometric head of the groundwater in Ω, the function f characterizes the sources and sinks in Ω, and the function g_2 characterizes the inflow and outflow through Γ_2 (see, for example, [124]). We say that this boundary value problem is of mixed type if neither Γ_1 nor Γ_2 is empty, of Dirichlet type if Γ_2 = ∅, and of Neumann type if Γ_1 = ∅.

The steady system (0.3)–(0.4) contains three known functions f, g_1 and g_2. When the coefficients q and a are given, the problem of solving (0.3)–(0.4) for u is called the forward problem. Conversely, the problem of identifying the coefficients q and a from observed data of a solution u of (0.3)–(0.4) is called the inverse problem.

In this work we deal with the inverse problems for the steady cases of (0.1)–(0.2). Namely, we are concerned with the Neumann problem for (0.3). We investigate convergence rates for the Tikhonov regularization of the problems of identifying the coefficient q in the Neumann problem for the elliptic equation

−div(q∇u) = f in Ω,  q ∂u/∂n = g on ∂Ω,  (0.5)–(0.6)

and the coefficient a in the Neumann problem for the elliptic equation

−Δu + au = f in Ω,  ∂u/∂n = g on ∂Ω,  (0.7)–(0.8)

from imprecise values z^δ ∈ H^1(Ω) of a solution u with

‖u − z^δ‖_{H^1(Ω)} ≤ δ,

the noise level δ > 0 being given, while f and g are prescribed. The problem of simultaneously estimating the coefficients q and a in (0.3) with either the Neumann, the Dirichlet or mixed boundary conditions has been studied in [15, 62, 64, 83, 84, 86, 87]. The functions q and a in these problems are called the diffusion (or filtration, or transmissivity, or conductivity) and reaction coefficients, respectively. For different kinds of porous media, the diffusion coefficient varies over a large scale (see, for example, [123]).

During the last four decades many techniques have been developed for solving inverse problems of parameter identification in distributed parameter systems, concerning such quantities as hydraulic parameters, boundary conditions, pollution sources, dispersivities, adsorption kinetics, and filtration and reaction coefficients.

In 1973, Neuman [103] classified the techniques for solving inverse problems of identifying coefficients in distributed parameter systems as either "direct" or "indirect". In the "direct method" the head variations and derivatives are assumed to be known or estimated over the entire flow region, and the original governing equations are transformed into linear first-order partial differential equations of hyperbolic type for the unknown coefficients. With the combined knowledge of the heads and the initial and boundary conditions, a direct solution for the unknown coefficients may be possible. However, in practice only a limited number of observations is available, sparsely distributed over the flow region. Interpolation is then used to extend these data across the spatial domain, and this process may introduce serious errors into the identified coefficients. The output least-squares techniques have been used in the so-called "indirect methods", which try to minimize a "norm", in the state space, of the difference between observed and calculated heads at specified observation points. The main advantage of the output least-squares methods is that the formulation of the inverse problem is applicable to situations where the number of observations is limited, and that they do not require derivatives of the measured data. However, these methods have several shortcomings. First, the objective functional to be minimized is nonlinear and nonconvex. Second, the numerical implementation is often iterative, so that a large number of repeated solutions of the forward problem is required in order to obtain useful results. Third, since the minimization problem is nonlinear and nonconvex, these methods depend critically on the initial guesses of the identified coefficients for rapid convergence of the iterative procedures; with large, poorly-conditioned functionals, the convergence may be slow, pick out only a local minimum, or even fail (see also [14, 45, 96, 113, 118, 127, 129]). One of the aims of this thesis is to overcome these serious shortcomings of the output least-squares method: we will use energy functionals, which are convex, instead of the output least-squares functional (see § 0.4 for more details).
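The nonconvexity of the output least-squares functional appears already in a zero-dimensional toy problem (our illustration, not from the thesis): if the forward map is q ↦ 1/q, which is the scaling of a linear elliptic solution in a constant coefficient q, then J(q) = (1/q − z)^2 violates midpoint convexity.

```python
# Toy output least-squares functional: forward map u(q) = 1/q
# (the scaling behavior of a linear elliptic problem in a constant
# coefficient q), measurement z = 1, so J(q) = (1/q - 1)^2.
def J(q):
    return (1.0 / q - 1.0) ** 2

# Midpoint convexity would require J((a+b)/2) <= (J(a) + J(b)) / 2.
a, b = 2.0, 4.0
lhs = J((a + b) / 2)      # J(3) = (1/3 - 1)^2 = 4/9
rhs = (J(a) + J(b)) / 2   # (1/4 + 9/16) / 2 = 13/32
print(lhs > rhs)  # True: J is not convex in q
```

This is exactly the kind of behavior that makes iterative minimization sensitive to the initial guess, and it is what the convex energy functionals used later avoid.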

Suppose that the coefficient q in (0.5)–(0.6) is given, so that we can determine the unique solution u; this defines a nonlinear coefficient-to-solution operator which maps q to the solution u = u(q) := U(q). Thus, the inverse problem in our setting is to solve the equation

U(q) = u

for q with u being given, which is a nonlinear and ill-posed inverse problem.
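That U is nonlinear, even though the underlying PDE is linear in u, is easy to check numerically: for a constant coefficient, doubling q halves the solution, whereas linearity of U would demand that the solution double. The finite-difference sketch below is ours (a homogeneous Dirichlet variant of the problem is used purely for brevity).

```python
import numpy as np

def solve_U(qval, n=100, f=1.0):
    """Solve -(q u')' = f on (0,1), u(0) = u(1) = 0, constant q,
    by central differences (a simplified 1D stand-in for U)."""
    h = 1.0 / n
    A = np.zeros((n - 1, n - 1))      # tridiagonal system, interior nodes
    for i in range(n - 1):
        A[i, i] = 2.0 * qval / h**2
        if i > 0:
            A[i, i - 1] = -qval / h**2
        if i < n - 2:
            A[i, i + 1] = -qval / h**2
    return np.linalg.solve(A, np.full(n - 1, f))

u1 = solve_U(1.0)
u2 = solve_U(2.0)                     # U(q + q)
# U(2q) = U(q)/2, whereas linearity would demand U(2q) = 2 U(q):
print(np.allclose(u2, 0.5 * u1))      # True
```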

In 1923, Hadamard [57] introduced the notion of well-posedness. A problem is said to be well-posed if the following conditions are satisfied:

1. Existence: there is a solution of the problem.

2. Uniqueness: there is at most one solution of the problem.

3. Stability: the solution depends continuously on the data (in some appropriate topologies).

If at least one of the above conditions is not satisfied, the problem is said to be ill-posed (or improperly posed). Hadamard thought that such problems have no physical meaning. However, many important problems in practice and science are ill-posed (see [13, 26, 68, 112, 123]), and their instability always causes serious difficulties: if a problem lacks stability, a small error in the observed data may lead to significant errors in the solution, which makes numerical solution extremely difficult.

Now we illustrate the ill-posedness of the problem of identifying the coefficient a in (0.7)–(0.8) from u by an example given by Baumeister and Kunisch [15]. Let

a(x) = 2,  u(x) = 1,

and consider perturbed states u_n with

‖u_n − u‖_{H^1(0,π)} ≤ c/(n + 1),

for which the corresponding coefficients a_n do not converge to a. Thus, the problem of identifying a in (0.7)–(0.8) is ill-posed in the L^2(0, π)- and L^∞(0, π)-norms.
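The mechanism behind such examples is easy to reproduce numerically. In the sketch below (our choice of perturbation, not necessarily the one in [15]) we perturb u = 1 by cos(nx)/n^2, which is small in H^1(0, π), yet the coefficient recovered pointwise from a = (f + u'')/u changes by O(1).

```python
import numpy as np

x = np.linspace(0.0, np.pi, 20001)
dx = x[1] - x[0]
n = 50                                # frequency of the perturbation

# Exact data: -u'' + a u = f with a = 2, u = 1, f = 2.
u = np.ones_like(x)
# Perturbed state u_n = 1 + cos(nx)/n^2, derivatives taken analytically.
un   = 1.0 + np.cos(n * x) / n**2
dun  = -np.sin(n * x) / n             # u_n'
ddun = -np.cos(n * x)                 # u_n''
f = 2.0

def l2(v):                            # approximate L2(0, pi) norm
    return np.sqrt(np.sum(v**2) * dx)

h1_err = np.sqrt(l2(un - u)**2 + l2(dun)**2)   # ||u_n - u||_{H^1}
an = (f + ddun) / un                  # coefficient identified from u_n
a_err = l2(an - 2.0)                  # ||a_n - a||_{L^2}

print(h1_err)   # small: the data perturbation is tiny in H^1
print(a_err)    # O(1): the identified coefficient is far from a = 2
```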

If we rewrite equation (0.5) as a first-order hyperbolic partial differential equation in the unknown q, we arrive at

∇q · ∇u + qΔu = −f.

It turns out that if ∇u vanishes in subregions of Ω, then it is impossible to determine q on these subregions. This is one of the reasons why our coefficient identification problem is ill-posed (see, for example, [109]). This situation is possible when f ≠ 0, although Alessandrini [5] has shown that if f = 0 and if u|_{∂Ω} has a finite number of relative maxima and minima, then ∇u vanishes only at a finite number of points in Ω, with finite multiplicity.

We note that in our setting we assume to have observations z^δ ∈ L^2(Ω) and z_1^δ ∈ (L^2(Ω))^d of the solution u and its gradient, respectively. Such assumptions have been used by many authors, e.g., Acar [1], Banks and Kunisch [13], Chan and Tai [24, 25], Chavent [26], Chavent and Kunisch [29], Chen and Zou [31], Ito and Kunisch [70, 71], Ito, Kroller and Kunisch [69], Keung and Zou [79], Knowles et al. [84]–[86], Kohn and Lowe [88], Vainikko [121, 122], Vainikko and Kunisch [124], and Zou [131]. In practice, the observation is measured at certain points, and we need to interpolate the point observations. The gradient of the solution may not be measurable directly, but there are several ways to approximate it. For example, Chan and Tai [24, 25] suggested using the differentiation formulas by Anderssen and Hegland [8]; Knowles et al. [86] applied first a mollification process and then finite differences; whereas Kaltenbacher and Schöberl [73] used a Clément operator Π_sm for smoothing the data and obtained the gradient as a by-product (see also [32, pp. 154–157]). Kaltenbacher and Schöberl [73, pp. 679–680] showed that if ‖u − z^δ‖_{L^2(Ω)} ≤ ε and u ∈ H^2(Ω) ∩ W^{1,∞}(Ω), then one can explicitly choose smoothing parameters such that ‖u − Π_sm z^δ‖_{L^2(Ω)} ≤ cε and ‖u − Π_sm z^δ‖_{H^1(Ω)} ≤ c√ε, with c related to the norm of Π_sm from L^2(Ω) to L^2(Ω).

The question is whether, if observations of u and ∇u are given, our inverse problem is still ill-posed. In the case where a is sought, the above explicit example by Baumeister and Kunisch [15] shows this fact in the L^2 and L^∞ norms. We could not find an explicit example showing that the problem of identifying q from observations of u and ∇u is ill-posed in the same topologies. However, the ill-posedness of this problem was discussed by Kohn and Lowe [88, p. 123]. Furthermore, Vainikko [122] showed that this problem is ill-posed even if |∇u| ≥ c > 0 in Ω. Concerning this fact, see also the dissertation of Cherlenyak [32, pp. 147–154], where a full proof is given, based on private communications with Vainikko. Besides, Chavent and Kunisch [29, pp. 432–434] (see also [26, p. 28 and § 4.9]) have also shown the ill-posedness of the problem. However, under certain additional assumptions, the problem of identifying q in (0.5)–(0.6) is well-posed in the H^{−1}-norm with respect to measurement error in the H^1-norm, where H^{−1} is the dual of H^1 (see also [85]). In fact, assume that the boundary ∂Ω is of class C^1 and u ∈ W^{2,∞}(Ω) with |∇u| ≥ γ a.e. on Ω, where γ is a positive constant. Then one can verify that if v is the state corresponding to the coefficient p, we have

‖q − p‖_{H^{−1}(Ω)} ≤ C ‖u − v‖_{H^1(Ω)}.  (0.12)

Indeed, first note that for each element ξ ∈ H^1(Ω) there exists ϑ_ξ ∈ H^1(Ω) such that

|⟨q − p, ξ⟩| ≤ ‖p‖_{L^∞(Ω)} ‖∇(u − v)‖_{L^2(Ω)} ‖∇ϑ_ξ‖_{L^2(Ω)} ≤ C ‖u − v‖_{H^1(Ω)} ‖ϑ_ξ‖_{H^1(Ω)} ≤ C ‖u − v‖_{H^1(Ω)} ‖ξ‖_{H^1(Ω)}

for all ξ ∈ H^1(Ω), where the positive constant C is independent of ξ. This leads to the estimate (0.12). However, the well-posedness of the identification problem in this sense is of little practical use, since a good approximation in the H^{−1}-norm is physically meaningless. For the identification problem it is therefore of interest to find special identification methods that are well-posed in the L^p-norm, 1 ≤ p ≤ ∞.
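The "mollify first, then differentiate" idea attributed above to Knowles et al. [86] can be sketched as follows (our implementation; the Gaussian kernel and its width are ad hoc choices, not taken from [86]): smoothing the noisy data before applying finite differences removes the catastrophic noise amplification of raw numerical differentiation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2001
x = np.linspace(0.0, np.pi, N)
h = x[1] - x[0]
z = np.sin(x) + 1e-3 * rng.standard_normal(N)   # noisy measurement of u

def central_diff(v):
    return (v[2:] - v[:-2]) / (2 * h)

# Mollification: discrete convolution with a normalized Gaussian kernel.
w = 50                                           # half-width in grid points (ad hoc)
k = np.exp(-0.5 * (np.arange(-w, w + 1) / (w / 3.0))**2)
k /= k.sum()
z_smooth = np.convolve(z, k, mode="same")

raw_grad    = central_diff(z)
smooth_grad = central_diff(z_smooth)
true_grad   = np.cos(x[1:-1])

# Compare on the interior, away from convolution boundary effects.
s = slice(2 * w, N - 2 * w)
err_raw    = np.max(np.abs(raw_grad[s]    - true_grad[s]))
err_smooth = np.max(np.abs(smooth_grad[s] - true_grad[s]))
print(err_smooth < err_raw)   # smoothing first gives a far better gradient
```

The kernel width trades bias against variance, which is the role of the explicit smoothing-parameter choice in the Kaltenbacher and Schöberl estimates quoted above.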

Up to now, many papers have been published on the coefficient identification problems considered in this thesis. Different techniques and methods have been proposed for solving them, such as output least-squares methods [6, 26, 27, 29, 30, 36, 37, 46, 53, 99, 100], regularization methods [42, 54, 59, 60, 61, 62, 91, 102, 104, 116, 120], equation error methods [51, 76, 78], variational methods [88], integrating along characteristics [109], finite difference schemes [110], the singular perturbation technique [5], the augmented Lagrangian technique [31, 52, 69, 70, 72], the long-time behavior of an associated dynamical system [65], the level set method [25, 114, 115], and iterative methods [11, 12, 20, 74, 75, 80]. In the next section we describe these approaches to our inverse problems in more detail.

We now discuss some of the techniques and methods that have been used for solving coefficient identification problems. Compared to the problem of identifying q in (0.5), the problem of identifying a in (0.7) has received less attention. However, some authors have studied this problem, such as Alt [6], Banks and Kunisch [13], Baumeister and Kunisch [14], Chavent [26], Chavent, Kunisch and Roberts [30], Colonius and Kunisch [36, 37], Engl, Hanke and Neubauer [41], Engl, Kunisch and Neubauer [42], Hào and Quyền [59, 60, 61, 62], Hein and Meyer [64], Ito, Kroller and Kunisch [69], Ito and Kunisch [70, 72], Knowles [82, 83, 84], Neubauer [101], and Resmerita and Scherzer [108]. Thus, in the following we describe some approaches introduced in [5, 42, 46, 65, 76, 88, 105, 108, 109, 110] for solving the problem of identifying the coefficient q in (0.5).

0.3.1 Integrating along characteristics

In the article [109], Richter has written equation (0.5) as a first-order hyperbolic partial differential equation in the unknown q = q(x), which leads to

L(q, u) := ∇q(x) · ∇u(x) + q(x)Δu(x) = −f(x),  x ∈ Ω ⊂ R^2.  (0.15)

He assumes that

(H1) u ∈ C^2(Ω̄), f ∈ L^∞(Ω);

(H2) there is a constant k > 0 such that max{|∇u(x)|, Δu(x)} ≥ k for all x ∈ Ω̄.  (0.16)

This condition is equivalent to the requirement that the domain Ω can be divided into subregions Ω_1 and Ω_2 in which |∇u| and Δu are uniformly positive, respectively.

The method consists in integrating along the characteristic curves of (0.15) and solving the ordinary differential equation to which (0.15) reduces along such curves. The author concluded that for any f the equation L(q, u) = −f has a unique solution q = q(x) assuming prescribed values along the "inflow" boundary Γ ⊂ ∂Ω (essentially that portion of ∂Ω where the outer normal derivative of u is negative), and that

‖q‖_{L^∞(Ω)} ≤ (C(u)/k^2) ‖f‖_{L^∞(Ω)}.

If p is another coefficient, corresponding to data v and right-hand side g, we obtain the following continuous dependence estimate:

‖q − p‖_{L^∞} ≤ C(u) ( max{ sup_Γ |q − p|, Ĉ } + [u] Ĉ ),

where

Ĉ = ‖∇p‖_{L^∞(Ω)} ‖∇(u − v)‖_{L^∞(Ω)} + ‖p‖_{L^∞(Ω)} ‖Δ(u − v)‖_{L^∞(Ω)} + ‖f − g‖_{L^∞(Ω)}.

0.3.2 Finite difference scheme

In the article [110], Richter investigates a finite difference method for identifying the coefficient in equation (0.5) under condition (0.16), as long as q(x) is prescribed along the inflow portion of the boundary ∂Ω of Ω. For this scheme, equation (0.5) is viewed as a first-order hyperbolic partial differential equation in the unknown q(x), which reduces to (0.15). First we describe the numerical method of [110] on the unit square (0, 1) × (0, 1).

We define a uniform grid as follows:

(x_i, y_j) = (ih, jh),  0 ≤ i, j ≤ n + 1,  h = 1/(n + 1).

Denote by Ω_h the set of interior grid points,

Ω_h = {(x_i, y_j) | 1 ≤ i, j ≤ n},

and by Γ_h the discrete inflow boundary (a grid point in ∂Ω is in Γ_h if its nearest neighboring grid point in Ω_h has a higher u value; e.g., (x_i, y_0) ∈ Γ_h for i ∈ {1, 2, …, n} if u(x_i, y_1) > u(x_i, y_0)). The grid values of q(x, y), u(x, y) and f(x, y) will be denoted by q_ij, u_ij and f_ij, and the discrete Laplacian by

Hu_ij = (u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1} − 4u_ij)/h^2.

Let k be the first index of the minimum of {u_{i−1,j}, u_ij, u_{i+1,j}} and l the second index of the minimum of {u_{i,j−1}, u_ij, u_{i,j+1}}. Solving the equation L_h(q_ij, u_ij) = −f_ij for q_ij, we have

q_ij = [ q_kj(u_ij − u_kj) + q_il(u_ij − u_il) − h^2 f_ij ] / [ (u_ij − u_kj) + (u_ij − u_il) + h^2 Hu_ij ].  (0.18)

Under the assumption (0.16), Richter has shown that the discrete problem (0.18) has a unique solution q_ij assuming prescribed values on Γ_h. Further, if u and q are sufficiently regular in Ω̄, u ∈ C^3(Ω̄) and q ∈ C^2(Ω̄), then the discrete solution converges to the true coefficient as h → 0, assuming q_ij = q(x_i, y_j) on Γ_h. Finally, Richter has extended the applicability of this difference scheme to irregular domains and to problems in which condition (0.16) does not hold but ∇u and Δu do not simultaneously vanish anywhere in Ω̄.
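The marching formula (0.18) is easy to try on a manufactured example. The sketch below is our code, not an implementation from [110]: it uses u = x + y and q = 1 + x + y on the unit square, for which the inflow boundary consists of the x = 0 and y = 0 edges and the difference quotients are exact, so the true coefficient is recovered to rounding error.

```python
import numpy as np

n = 20
h = 1.0 / (n + 1)
idx = np.arange(n + 2)
X, Y = np.meshgrid(idx * h, idx * h, indexing="ij")

u = X + Y                       # state; u increases with x and y
f = np.full_like(u, -2.0)       # from grad q . grad u + q Lap u = -f with q = 1 + x + y
q_true = 1.0 + X + Y

q = np.zeros_like(u)
q[0, :] = q_true[0, :]          # q prescribed on the discrete inflow boundary
q[:, 0] = q_true[:, 0]

def Hu(i, j):                   # five-point discrete Laplacian
    return (u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1] - 4*u[i, j]) / h**2

# March through interior nodes; the neighbors with smaller u values
# (indices k, l in (0.18)) have already been computed.
for i in range(1, n + 1):
    for j in range(1, n + 1):
        k = i + [-1, 0, 1][int(np.argmin([u[i-1, j], u[i, j], u[i+1, j]]))]
        l = j + [-1, 0, 1][int(np.argmin([u[i, j-1], u[i, j], u[i, j+1]]))]
        num = q[k, j]*(u[i, j] - u[k, j]) + q[i, l]*(u[i, j] - u[i, l]) - h**2 * f[i, j]
        den = (u[i, j] - u[k, j]) + (u[i, j] - u[i, l]) + h**2 * Hu(i, j)
        q[i, j] = num / den

err = np.max(np.abs(q[1:n+1, 1:n+1] - q_true[1:n+1, 1:n+1]))
print(err)   # exact up to rounding for linear u and q
```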

0.3.3 Output least-squares minimization

It seems that Frind and Pinder [48] were the first to apply the output least-squares method to the problem of identifying the coefficient q(x) in the Neumann problem for the elliptic equation (0.5)–(0.6). The least-squares approach says that if u(p) is the solution of (0.5)–(0.6) with the coefficient q replaced by p, then p is a good approximation of q if the difference between a measurement z of u and u(p) is small in L^2(Ω). For practical purposes, we need to define finite dimensional spaces to implement this approach. Let {Δ_h}_{0<h<1} be a regular and quasi-uniform triangulation of Ω with triangles T of diameter less than or equal to h. Given an L^2-measurement z of u, select finite dimensional subspaces A_h and V_h. To each coefficient q_h ∈ A_h we associate u_h(q_h) ∈ V_h, where u_h(q_h) solves (0.5)–(0.6) in a Galerkin approximation, i.e.

∫_Ω q_h ∇u_h(q_h) · ∇v_h dx = ∫_Ω f v_h dx + ∫_∂Ω g v_h dS

for all v_h ∈ V_h, and one seeks a minimizer q_h of ‖u_h(q_h) − z‖_{L^2(Ω)} over A_h ∩ Q_ad (problem (P_h)), where Q_ad is the admissible set of coefficients, bounded pointwise between two given positive constants.

Falk [46] has presented a very interesting error estimate for the approximation scheme (P_h). To this end, we need to formulate some hypotheses:

(H1) There are a constant unit vector ν and a constant σ > 0 such that ∇u · ν ≥ σ for all x ∈ Ω.

(H4) The observation error satisfies ‖u − z‖_{L^2(Ω)} ≤ ϵ.

Then, for all h sufficiently small, we have

‖q − q_h‖_{L^2(Ω)} ≤ C(h^r + h^{−2}ϵ),

where q_h is any solution of problem (P_h) and C is a positive constant independent of h and ϵ. Therefore, if z is the continuous piecewise polynomial interpolation of degree r + 1 of u, then

‖u − z‖_{L^2(Ω)} = O(h^{r+2})

by the standard approximation result, and we obtain the error estimate

‖q − q_h‖_{L^2(Ω)} = O(h^r).

0.3.4 Equation error method

In this method we replace the exact solution u by the measurement data z in (0.5). With z and f being given, we consider the mapping ψ(q) = ∇·(q∇z) + f and solve ψ(q) = 0 for the "true" coefficient q = q(x) by solving the problem

min_{q ∈ Q_ad} ‖∇·(q∇z) + f‖^2_H,  (0.19)

where Q_ad is the admissible set of coefficients and H is an appropriately chosen Hilbert space in which the boundary conditions on z can be incorporated into (0.19). Differently from the output least-squares method, (0.19) is convex, and hence the existence of a unique global minimizer follows.

Under an identifiability assumption the equation error method is realized with H = L^2(Ω). A multigrid algorithm is devised to solve the linear matrix equation which arises from the discretization of (0.19) and the application of a necessary optimality condition.

An alternative approach can be based on the weak formulation of (0.19). In the case of homogeneous Dirichlet boundary conditions z|_∂Ω = 0, it is given by

min ‖∇·(q∇z) + f‖^2_{H^{−1}(Ω)},  (0.20)

where H^{−1}(Ω) is the dual space of H^1_0(Ω). Note that (0.20) is equivalent to

min ‖Δ^{−1}(∇·(q∇z) + f)‖^2_{H^1_0(Ω)}.

The case of the Neumann condition q ∂z/∂n = g on ∂Ω and its numerical treatment for smooth as well as for discontinuous coefficients q is given in [1].

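Since the equation error residual is linear in q, the minimization (0.19) is an ordinary linear least-squares problem after discretization. The one-dimensional sketch below is our own construction (q(x) = 1 + x, u(x) = x + x^2/2 on (0, 1), with q anchored at the left endpoint for identifiability), not an implementation from the cited papers.

```python
import numpy as np

N = 100
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)

du = 1.0 + x                        # u'(x) for u = x + x^2/2 (exact data z = u)
q_true = 1.0 + x                    # then (q u')' = ((1+x)^2)' = 2(1+x) = -f
xm = x[:-1] + h / 2                 # midpoints
f_mid = -2.0 * (1.0 + xm)

# Equation error residual at midpoints: ((q u')_{i+1} - (q u')_i)/h + f_{i+1/2},
# a linear function of the nodal unknowns q_0..q_N.
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
for i in range(N):
    A[i, i]     = -du[i] / h
    A[i, i + 1] =  du[i + 1] / h
    b[i] = -f_mid[i]
A[N, 0] = 1.0                       # anchor q(0) = 1 for identifiability
b[N] = 1.0

q_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
err = np.max(np.abs(q_ls - q_true))
print(err)   # essentially machine-precision recovery for this quadratic flux
```

The convexity claimed for (0.19) is visible here: the problem reduces to a quadratic in the unknown nodal values, with no dependence on an initial guess.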


0.3.5 Modified equation error and least-squares method

Kärkkäinen introduced this method in [76]. Consider the problem of identifying the coefficient q in the homogeneous elliptic boundary value problem

−div(q∇u) = f in Ω,
u = 0 on Γ_0,  ∂u/∂n = 0 on Γ_1,  (0.21)

where Ω is a bounded domain in R^d, d ≥ 1, with smooth boundary ∂Ω = Γ̄_0 ∪ Γ̄_1, and Γ_0, Γ_1 are relatively open disjoint subsets of ∂Ω. The main idea of this method is to add to the least-squares cost functional an extra term which takes into account the underlying equation (0.21), multiplied by a weight chosen according to the finite dimensional spaces so as to balance the different amounts of differentiation in the two terms. This approach combines the output least-squares method with the equation error method to transform the identification problem into a minimization problem.

Let {Δ_h}_{0<h<1} be a triangulation of Ω with triangles T of diameter less than or equal to h. If the boundary ∂Ω is curved, we use triangles with one edge replaced by the curved segment of the boundary. Further, it is assumed that the family {Δ_h}_{0<h<1} is regular and quasi-uniform. Given an L^2-measurement z of u, select finite dimensional subspaces A_h and V_h for the coefficient and the state, respectively. To each coefficient q_h ∈ A_h we associate u_h(q_h) ∈ V_h, where u_h(q_h) solves (0.21) in a Galerkin approximation, i.e.

∫_Ω q_h ∇u_h(q_h) · ∇v_h dx = ∫_Ω f v_h dx

for all v_h ∈ V_h ⊂ H^1(Ω ∪ Γ_1) = {v ∈ H^1(Ω) | v|_{Γ_0} = 0}. This approach to the approximate determination of q is to solve the following problem: find q_h ∈ A_h such that the combined least-squares and equation error functional is minimized,

with r ≥ d.

(H3) We choose

S^r_{h,l} = {v ∈ C^{l−1}(Ω̄) | v|_T ∈ P_r, ∀T ∈ ∆_h}

with P_r being the space of polynomials of degree less than or equal to r. We denote by S^{r,0} = S^r ∩ H¹(Ω ∪ Γ₁) the subspace of S^r of functions vanishing on Γ₀.
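To make the Galerkin approximation above concrete, here is a minimal 1-D sketch of our own (not Kärkkäinen's code), assuming Γ₀ = {0} (Dirichlet side) and Γ₁ = {1} (Neumann side) with piecewise-linear elements on a uniform mesh; the function name and the midpoint quadrature are illustrative choices.

```python
import numpy as np

# A 1-D instance of the Galerkin scheme: solve -(q u')' = f on (0, 1) with
# u(0) = 0 on Gamma_0 = {0} and the natural condition u'(1) = 0 on
# Gamma_1 = {1}, using piecewise-linear elements on a uniform mesh.
def galerkin_1d(q_fun, f_fun, n):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    K = np.zeros((n + 1, n + 1))
    F = np.zeros(n + 1)
    for e in range(n):                    # assemble element by element
        xm = 0.5 * (x[e] + x[e + 1])      # midpoint quadrature for q and f
        K[e:e + 2, e:e + 2] += q_fun(xm) / h * np.array([[1.0, -1.0],
                                                         [-1.0, 1.0]])
        F[e:e + 2] += f_fun(xm) * h / 2.0
    K[0, :] = 0.0; K[0, 0] = 1.0; F[0] = 0.0   # essential condition u(0) = 0
    return x, np.linalg.solve(K, F)

x, u = galerkin_1d(lambda s: 1.0, lambda s: 1.0, 50)
# For q = 1, f = 1 the exact solution is u(x) = x - x^2/2.
```

For constant data this piecewise-linear scheme reproduces the exact solution at the nodes, which makes it a convenient building block for the discrete identification problem above.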

Then, for h small enough and the regularization parameter ρ = h^r, one has

∫_a^b |q_h − q||u′| dx ≤ C(h^{r+1} + h^{−1}ϵ),

where C is a positive constant independent of h.

We present here another numerical scheme for the reconstruction of the coefficient q in (0.5)–(0.6). This approach was developed in [88] and is motivated by the simple observation that, for any positive weights γ₁ and γ₂,

∫_Ω |σ − q∇u|² dx + γ₁ ∫_Ω |div σ + f|² dx + γ₂ ∫_{∂Ω} |σ · n − g|² ≥ 0,  (0.22)

for any choice of q and any vector field σ, the minimum being achieved only when σ = q∇u with q and u satisfying (0.5)–(0.6). This variational method for reconstructing the unknown coefficient involves minimizing (0.22) numerically over suitable finite-dimensional spaces of coefficients and vector fields, using the measured data u^m, f^m, and g^m.
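As a sanity check of this observation, the short sketch below (our own 1-D discrete illustration, with a hypothetical coefficient q and state u) verifies that the first term of (0.22) vanishes exactly for σ = q∇u and is positive for any other flux:

```python
import numpy as np

# 1-D discrete illustration of the observation behind (0.22): the first term
# vanishes exactly when sigma = q * u'. The coefficient q and state u here
# are hypothetical test functions, not solutions of (0.5)-(0.6).
n = 200
x = np.linspace(0.0, 1.0, n)
q = 1.0 + x ** 2
u = np.sin(np.pi * x)
du = np.gradient(u, x)

def misfit(sigma):
    # discrete analogue of the first integral in (0.22)
    return float(np.sum((sigma - q * du) ** 2) * (x[1] - x[0]))

sigma_good = q * du          # the consistent flux sigma = q grad(u)
sigma_bad = (q + 0.5) * du   # flux of a perturbed coefficient
```

The misfit is zero only for the consistent pair, which is exactly why minimizing (0.22) over (q, σ) recovers the coefficient.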

Let {∆_h}_{0<h<1} be a family of regular, quasi-uniform triangulations of Ω, an open bounded connected domain with a Lipschitz boundary. If the boundary ∂Ω is curved, we use triangles with one edge replaced by the curved segment of the boundary. Given measurements u^m, f^m, and g^m of u, f, and g, respectively, select finite-dimensional subspaces A_h, K_h for the coefficient and the vector field variables. The variational method to identify the unknown coefficient q in (0.5)–(0.6) involves minimizing the functional


extremely easy to implement. The disadvantage of this method is the large number of variables it uses; for example, if σ and q are piecewise linear on a triangulation with N² nodes, then the functional to be minimized depends on 3N² variables.

Variations of (0.23) are possible. For instance, one might consider using σ · n = g^m and div σ = −f^m as constraints and minimizing ∥σ − q∇u^m∥²_{L²(Ω)}, or perhaps ∥σ − q∇u^m∥²_{L²(Ω)} + ϵ∥q∥²_{H¹(Ω)} for some small positive ϵ.

Now we make some assumptions.

(H1) Let q and u satisfy equations (0.5)–(0.6) and let them have the following regularities: q ∈ H²(Ω), u ∈ H³(Ω), and ∆u ∈ C(Ω̄).

(H2) Set Q(k) = {w ∈ C(Ω̄) | w|_T ∈ P_k, ∀T ∈ ∆_h} with P_k being the set of polynomials of degree less than or equal to k, and

A_h = {w ∈ Q(1) | 0 < q̲ ≤ w ≤ q̄},
K_h = Q(1) × Q(1),

where q̲ and q̄ are positive constants.

(H3) Let u^m, f^m, and g^m be measurements of u, f and g, respectively, where u^m ∈ Q(k) for some fixed k. Then, the authors in [88] conclude that:

Let (q_{p,h}, σ_{p,h}) ∈ A_h × K_h solve the problem

min_{(q,σ) ∈ A_h × K_h} {∥σ − q∇u^m∥²_{L²(Ω)} + ∥div σ + f^m∥²_{L²(Ω)} + h^{−1}∥σ · n − g^m∥²_{L²(∂Ω)} + ρ∥∇q∥²_{L²(Ω)}},

where ρ ∼ (h + ϵ + λ₁ + h^{−1/2}λ₂)². Then, if u ∈ C²(Ω̄) and |∇u| ≠ 0 on Ω̄, one has

∥q_{p,h} − q∥_{L²(Ω)} ≤ C(h + ϵ + λ₁ + h^{−1/2}λ₂)^{1/2}.

Further, if f^m = f, g^m = g and (q_{p,h}, σ_{p,h}) ∈ A_h × K_h solve the corresponding problem with the exact data f and g, then

∥q_{p,h} − q∥_{L²(Ω)} ≤ Ch^{1/2}.

All the positive constants C in these error estimates are independent of h, ϵ, λ₁ and λ₂.

In the article [5] Alessandrini has proposed a singular perturbation technique to determine the spatially varying coefficient in the special case f = 0 in the partial differential equation (0.5) when the Dirichlet boundary condition is u = g on ∂Ω, where Ω is a connected, C²-smooth, bounded domain in ℝ² and g is a smooth function which is precisely known. Moreover, it is assumed that q satisfies the ellipticity condition

0 < q̲ ≤ q(x) ≤ q̄, x ∈ Ω̄,

along with the following regularity hypothesis

|∇q| ≤ E, x ∈ Ω̄,

where q̲, q̄ and E are fixed positive constants.

Alessandrini has proved that if g has a finite number N of relative maxima and minima on ∂Ω, then the gradient of u vanishes only at a finite number of interior points, and only with a finite multiplicity. Moreover, the number of interior critical points and their multiplicities are controlled in terms of N. Alessandrini's algorithm consists of an approximation procedure. It has been shown that, as ϵ → 0, the solution q_ϵ of the elliptic boundary value problem

ϵ∆q_ϵ + div(q_ϵ∇u) = 0 in Ω,  (0.24)

converges to q in L^p(Ω) for every 1 ≤ p < ∞. Hence an approximate identification is performed by solving the problem (0.24) with a suitably chosen value of ϵ. It is worth mentioning that under very smooth hypotheses on q, q_ϵ, u, g, Ω and the boundary values q|_{∂Ω} of q the following estimate holds


0.3.8 Long-time behavior of an associated dynamical system

Hoffmann and Sprekels in [65] have proposed a new and ingenious technique to reconstruct coefficients in elliptic equations. An algorithm is developed to identify the unknown coefficients without a minimization technique. This method is based on the construction of certain time-dependent problems which contain the original equation as an asymptotic steady state. The specific equation they considered is

where Ω is an open and bounded set in ℝ^d, u* ∈ H¹(Ω) and f* ∈ H⁻¹(Ω). The algorithm seeks to determine a pointwise symmetric matrix function A* ∈ L^∞(Ω) solving (0.25). Here A* ∈ L^∞(Ω) means that a*_{ij} ∈ L^∞(Ω) for all entries of A*. The main idea of this method is to regard (0.25) as the asymptotic steady state for t → ∞ of the following system of parabolic equations.

The system (0.27) has a unique solution (u(t), A(t)) for all t ≥ 0. They show that for each sequence t_n → ∞, there exists a subsequence (t_{k_n}) such that A(t_{k_n}) → A_∞ weakly in L²(Ω), where A_∞ satisfies (0.25), under the hypothesis that (0.25) has at least one positive definite solution A* ∈ L^∞(Ω).

The key tool in this result is the a-priori estimate

sup_{t≥0} {∥∇u(t) − ∇u*∥²_{L²(Ω)} + ∥A(t) − A*∥²_{L²(Ω)}} + ∫₀^∞ ∥∇u(t) − ∇u*∥²_{L²(Ω)} dt ≤ C,  (0.28)

where C = C(u₀, A₀, A*) is a positive constant. For practical purposes, it is necessary to replace the system (0.27) by a finite-dimensional scheme. To this end, a Galerkin approximation is proposed. Under additional assumptions, it can be shown that the a-priori estimate (0.28) holds in the finite-dimensional case. This estimate is used again to show that if A_n(t) is the n-dimensional Galerkin solution of the system (0.27), then lim_{t→∞} A_n(t) = A_n ∈ L^∞(Ω) and A_n → A_∞ weakly in L²(Ω) as n → ∞, where A_∞ satisfies (0.25).

It is worth noticing that this method gives a matrix coefficient, not a scalar one. Besides, in this context the solution of (0.25) is not unique. The method of [65] presumably chooses a particular solution, but it is not clear which one.

The problems of identifying the coefficient q in (0.5)–(0.6) and the coefficient a in (0.7)–(0.8) are well known to be ill-posed, and there have been several stable methods for solving them, such as the numerical methods presented in Subsections 0.3.1–0.3.8 (see also [13, 51, 56, 69, 73, 79, 87, 104, 125, 128, 131]) and regularization methods [11, 12, 20, 58, 74, 75, 80, 122, 123]. Among these stable methods, the Tikhonov regularization seems to be the most popular. Therefore, we will explain this technique applied to our inverse problems in more detail.

For solving the nonlinear ill-posed inverse problems of identifying the coefficient q in (0.5)–(0.6) and the coefficient a in (0.7)–(0.8), Engl, Kunisch and Neubauer in [42] consider the general nonlinear ill-posed equation (0.29), where U is a nonlinear mapping between Hilbert spaces whose domain is some admissible set of coefficients. The output least-squares method with the Tikhonov regularization is then formulated as follows:

min ∥U(q) − z^δ∥² + ρ∥q − q*∥².  (0.30)

Here, ρ > 0 is a regularization parameter, q* is an a-priori estimate of the true coefficient, and z^δ is the observed data of the exact data u with a measurement error of level δ > 0, that is, ∥u − z^δ∥ ≤ δ.
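For a linear operator, the analogue of (0.30) has a closed-form minimizer via the normal equations. The sketch below is our own toy linear illustration (not the elliptic coefficient problem itself) of the roles of ρ, q* and the noise level; all names are illustrative.

```python
import numpy as np

# Toy linear analogue of (0.30): min ||A q - z_delta||^2 + rho ||q - q_star||^2.
# For a linear A the minimizer solves (A^T A + rho I) q = A^T z_delta + rho q_star.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5)) * np.array([1.0, 0.5, 0.1, 0.01, 0.001])  # ill-conditioned columns
q_true = np.ones(5)
delta = 1e-3
z_delta = A @ q_true + delta * rng.standard_normal(20)  # noisy data of level ~ delta
q_star = np.zeros(5)   # a-priori estimate of the coefficient
rho = delta            # the parameter choice rho ~ delta

q_reg = np.linalg.solve(A.T @ A + rho * np.eye(5),
                        A.T @ z_delta + rho * q_star)
```

Without the ρ-term the normal equations are nearly singular here; the a-priori estimate q* determines toward which element the poorly determined components of q are pulled.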

They assume that:

(A1) There exists a solution q^δ of the problem (0.30).

(A2) The mapping U is Fréchet differentiable and its Fréchet derivative U′ is Lipschitz continuous with Lipschitz constant L.

(A3) There exists a source element w such that

q† − q* = U′(q†)*w

and the "small enough condition" (0.31) holds, where q† is a q*-minimum-norm solution of equation (0.29) defined by

U(q†) = u and ∥q† − q*∥ = min{∥q − q*∥ | U(q) = u}.

Then, they conclude that the regularized minimizers q^δ of the problem (0.30) converge to q† with the rate δ^{1/2} as δ → 0 and ρ ∼ δ. Namely,


However, in their work these authors could apply their theory only to the one-dimensional cases of the above coefficient identification problems, with the requirement of H³-regularity and H²-regularity on the sought coefficients q and a in (0.5)–(0.6) and (0.7)–(0.8), respectively.

To find more features (e.g., points of discontinuity) of the sought coefficients, recently many authors have considered the nonlinear ill-posed problem (0.29) in Banach spaces, especially in the space of functions with bounded total variation. This approach is promising. However, it is difficult, since the Tikhonov functionals are not differentiable and regularization theory in Banach spaces is not so well developed. To summarize some ideas in this direction, we introduce the notion of the Bregman distance. Let L be a Banach space with L* being its dual space. Suppose that R : L → (−∞, +∞] is a proper convex functional and ∂R(q) stands for the subdifferential of R at q ∈ Dom R := {q ∈ L | R(q) < +∞} ≠ ∅, defined by

∂R(q) := {q* ∈ L* | R(p) ≥ R(q) + ⟨q*, p − q⟩_{(L*,L)} for all p ∈ L}.

The set ∂R(q) may be empty; however, if R is continuous at q, then it is nonempty. Further, ∂R(q) is convex and weak* compact (see [39], Propositions 5.1, 5.2, pp. 21–22). In case ∂R(q) ≠ ∅, for any fixed p ∈ L we denote by

D_R(p, q) := {R(p) − R(q) − ⟨q*, p − q⟩_{(L*,L)} | q* ∈ ∂R(q)}.

Then, for a fixed element q* ∈ ∂R(q),

D_{q*}(p, q) := R(p) − R(q) − ⟨q*, p − q⟩_{(L*,L)}

is called the Bregman distance with respect to R and q* of two elements p, q ∈ L.

In general, the Bregman distance is not a metric on L. However, for each q* ∈ ∂R(q) we have D_{q*}(p, q) ≥ 0 for any p ∈ L and D_{q*}(q, q) = 0. Further, in case R is a strictly convex functional, D_{q*}(p, q) = 0 if and only if p = q. The notion of the Bregman distance was first given by Bregman [18] for Fréchet differentiable R and it was generalized by Kiwiel [81] to nonsmooth but strictly convex R. Burger and Osher [21] further generalized this notion to R being neither smooth nor strictly convex.
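For a smooth convex R the subgradient is the ordinary gradient, so the Bregman distance is directly computable. The sketch below (our own illustration) checks the stated properties for R(q) = ½∥q∥², for which D_{q*}(p, q) reduces to ½∥p − q∥²:

```python
import numpy as np

# Bregman distance D_{q*}(p, q) = R(p) - R(q) - <q*, p - q> for a smooth
# convex R, where q* = grad R(q) is the unique subgradient.
def bregman_distance(R, grad_R, p, q):
    return R(p) - R(q) - np.dot(grad_R(q), p - q)

# For R(q) = 1/2 ||q||^2 the distance reduces to 1/2 ||p - q||^2.
R = lambda v: 0.5 * np.dot(v, v)
grad_R = lambda v: v

p = np.array([1.0, 2.0])
q = np.array([0.0, 1.0])
d = bregman_distance(R, grad_R, p, q)   # = 0.5 * ||p - q||^2 = 1.0
```

This special case also shows why Bregman-distance rates generalize Hilbert-space norm rates: for quadratic R the two notions coincide up to the factor ½.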

Resmerita and Scherzer in [108] considered the general nonlinear ill-posed equation (0.29), where U : D(U) ⊂ L → F is a nonlinear mapping between the Banach spaces L and F, and D(U) is the domain of the operator U. The output least-squares method with regularization by the proper convex functional R(·) is then formulated as follows:

min_{q ∈ D(U)} 1/2 ∥U(q) − z^δ∥²_F + ρR(q).  (0.32)

Here, ρ > 0 is the regularization parameter and z^δ is the observed data of the exact one u with ∥u − z^δ∥_F ≤ δ, δ > 0.

They assume that:

(H1) There exists an R-minimizing solution q† of equation (0.29), defined by

U(q†) = u and R(q†) = min{R(q) | U(q) = u}.

(H2) There exists a solution q^δ of problem (0.32).

(H3) The mapping U is Gâteaux differentiable and there exists a positive constant γ such that for any q ∈ D(U) ∩ B_r(q†)


for all ξ ∈ ∂R(q†), where B_r(q†) := {q ∈ L | ∥q − q†∥_L < r}, r > 0.

(H4) There exists a source element w ∈ F* such that

ξ := U′(q†)*w ∈ ∂R(q†)

and the "small enough condition" (0.34) holds.

Then, they conclude that the minimizers q^δ of problem (0.32) converge to q† with the rate δ as δ → 0 and ρ ∼ δ in the sense of the Bregman distance, i.e.,

D_ξ(q^δ, q†) = O(δ) as δ → 0 and ρ ∼ δ.

To apply Resmerita and Scherzer's theory to identification problems, we briefly introduce the notion of the space of functions with bounded total variation; for more details, the reader may consult Ambrosio, Fusco, and Pallara [7], Attouch, Buttazzo and Michaille [9], Evans and Gariepy [44], and Giusti [50]. A function q ∈ L¹(Ω) is said to be of bounded total variation if

∫_Ω |∇q| := sup{∫_Ω q div g dx | g ∈ C¹_c(Ω)^d, |g(x)| ≤ 1 for x ∈ Ω} < ∞,  (0.35)

and we set

BV(Ω) = {q ∈ L¹(Ω) | ∫_Ω |∇q| < ∞}.

It is a Banach space under the norm

∥q∥_{BV(Ω)} := ∥q∥_{L¹(Ω)} + ∫_Ω |∇q|.

Further, if Ω is an open bounded set in ℝ^d, d ≥ 1, with Lipschitz boundary, then W^{1,1}(Ω) ⊊ BV(Ω) (Giusti [50], pp. 3–4).
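In one dimension the total variation of a piecewise-constant function is simply the sum of its jump heights, which is what makes BV(Ω) attractive for discontinuous coefficients. A minimal discrete illustration of our own, on a uniform grid:

```python
import numpy as np

# Discrete 1-D total variation: the sum of the absolute differences between
# neighboring grid values, i.e. the sum of jump heights for step functions.
def total_variation_1d(q):
    return float(np.sum(np.abs(np.diff(q))))

# A step function with a single jump of height 2 has total variation 2,
# even though it does not belong to W^{1,1}; BV admits such discontinuities.
step = np.concatenate([np.zeros(50), 2.0 * np.ones(50)])
tv = total_variation_1d(step)   # = 2.0
```

The finite TV of the step function, despite its discontinuity, is the discrete counterpart of the strict inclusion W^{1,1}(Ω) ⊊ BV(Ω) noted above.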

In their work these authors could apply their theory only to the problem of identifying the coefficient a in (1.3)–(1.4). They take L = BV(Ω), the space of functions with bounded total variation, and

R(·) = 1/2 ∥·∥²_{BV(Ω)},

with ∫_Ω |∇(·)| being the total variation defined by (0.35). However, in this particular case the small enough condition (0.34) has been uncontrollable.

In this work, by our new approach based on convex energy functionals (see more details in Section 0.4), we will see that our source conditions are easy to check and much weaker than those of Engl, Kunisch and Neubauer [42] and Resmerita and Scherzer [108], since we remove the so-called small enough conditions (0.31) and (0.34) on the source functions, which are widespread and very hard to check in the theory of regularization of nonlinear ill-posed problems (see [41, 42, 108]).


0.4 Summary of the Dissertation

In this dissertation we investigate convergence rates for the Tikhonov regularization of the problems of identifying the coefficient q in the Neumann problem for the elliptic equation (0.5)–(0.6) and the coefficient a in the Neumann problem for the elliptic equation (0.7)–(0.8) as the error level δ in (0.9) tends to zero.

Although there have been many papers devoted to the subject, very few have been devoted to the convergence rates of the methods. Earlier, there was only the paper by Engl, Kunisch and Neubauer [42] devoted to convergence rates for the Tikhonov regularization of the above-mentioned problems. Recently, as a generalization of [42], Resmerita and Scherzer [108] investigated convergence rates for convex variational regularization. These authors use the output least-squares method with the Tikhonov regularization of the nonlinear ill-posed problems and obtain some convergence rates under certain source conditions. However, working with nonconvex functionals, they are faced with difficulties in finding the global minimizers. Further, their source conditions are hard to check and require high regularity of the sought coefficient. To overcome the shortcomings of the above-mentioned works, in this thesis we do not use the output least-squares method but follow Knowles [82, 83, 87] and Zou [131] in using the convex energy functionals (see (0.37) and (0.38)) and then applying the Tikhonov regularization to these convex energy functionals. We obtain convergence rates for three forms of regularization (L²-regularization, total variation regularization, and total variation regularization combined with L²-stabilization) of the inverse problems of identifying q in (0.5)–(0.6) and a in (0.7)–(0.8). Our source conditions are simple and much weaker than those of Engl, Kunisch and Neubauer [42] and Resmerita and Scherzer [108], since we remove the small enough conditions (0.31) and (0.34) on the source functions in the theory of regularization for nonlinear ill-posed problems, which are rather restrictive. Furthermore, our results are applicable to multi-dimensional identification problems. The crucial and new idea in the dissertation is that we use the convex energy functional

q ↦ J_{z^δ}(q) := 1/2 ∫_Ω q|∇(U(q) − z^δ)|² dx, q ∈ Q_ad,  (0.37)

for identifying q in (0.5)–(0.6) and the convex energy functional (0.38) for identifying a in (0.7)–(0.8). We then apply the Tikhonov regularization to these functionals and obtain convergence rates of the method. The content of this dissertation is presented in four chapters. In Chapter 1, we state the inverse problems of identifying the coefficient q in (0.5)–(0.6) and a in (0.7)–(0.8), and prove auxiliary results used in Chapters 2–4.

In Chapter 2, we apply L²-regularization to these convex energy functionals and investigate convergence rates of the method. Namely, for identifying q in (0.5)–(0.6) we consider the strictly convex minimization problem

min_{q ∈ Q_ad} J_{z^δ}(q) + ρ∥q − q*∥²_{L²(Ω)},  (0.39)

and for identifying a in (0.7)–(0.8) the strictly convex minimization problem

min G_{z^δ}(a) + ρ∥a − a*∥²_{L²(Ω)},  (0.40)

where ρ > 0 is the regularization parameter, and q* and a* are a-priori estimates of the sought coefficients q and a, respectively. Although these cost functionals appear more complicated than that of the output least-squares method, they are in fact much simpler because of their strict convexity, so there is no question about the uniqueness and localization of the minimizer. We will exploit this nice property to obtain convergence rates O(√δ), as δ → 0 and ρ ∼ δ, under simple and weak source conditions. Our main convergence results in Chapter 2 can now be stated as follows.

Let q† be the q*-minimum norm solution of the problem of identifying the coefficient q in (0.5)–(0.6) (see § 2.1.1) and q^δ be a solution of problem (0.39). Assume that there exists a functional w* ∈ H¹(Ω)* (see § 1.1.2 for the definition of H¹(Ω)) such that the source condition (0.41) is satisfied.

Similarly, let a† be the a*-minimum norm solution of the problem of identifying the coefficient a in (0.7)–(0.8) (see § 2.2.1) and a^δ be a solution of problem (0.40). Assume that there exists a functional w* ∈ H¹(Ω)* such that the source condition (0.42) is satisfied. Then,

∥a^δ − a†∥_{L²(Ω)} = O(√δ) and ∥U(a^δ) − z^δ∥_{H¹(Ω)} = O(δ)

as δ → 0 and ρ ∼ δ. Thus, in our source conditions the requirement on the smallness of the source functions is removed.

We note that (see Theorem 2.2.7) the source condition (0.42) is fulfilled for arbitrary dimension d, and hence a convergence rate O(√δ) of L²-regularization is obtained under the hypothesis that the sought coefficient a† is an element of H¹(Ω) and U(a†) ≥ γ a.e. on Ω, where γ is a positive constant.
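Convergence of this kind can be observed numerically. The sketch below is our own toy check on a linear smoothing operator (not the elliptic problem itself): the data q_true is built in the range of Aᵀ, a discrete analogue of a source condition, the parameter choice is ρ = δ, and the reconstruction errors shrink as δ → 0.

```python
import numpy as np

# Toy rate check: Tikhonov regularization of a linear smoothing operator
# with rho = delta. q_true lies in the range of A^T (a discrete source
# condition), for which a rate of roughly sqrt(delta) is expected.
rng = np.random.default_rng(1)
n = 40
U_, _, Vt = np.linalg.svd(rng.standard_normal((n, n)))
sigma = 1.0 / (1.0 + np.arange(n)) ** 2
A = U_ @ np.diag(sigma) @ Vt          # smoothing (ill-posed) operator
q_true = Vt.T @ sigma                 # q_true = A^T w with w = U_ @ ones
z = A @ q_true                        # exact data

deltas = [1e-2, 1e-4, 1e-6]
errors = []
for delta in deltas:
    noise = rng.standard_normal(n)
    z_delta = z + delta * noise / np.linalg.norm(noise)   # ||z - z_delta|| = delta
    q_reg = np.linalg.solve(A.T @ A + delta * np.eye(n), A.T @ z_delta)  # rho = delta
    errors.append(np.linalg.norm(q_reg - q_true))
# errors decrease monotonically as delta -> 0
```

The observed decay depends on how well the discrete source condition is satisfied; the construction of q_true above is what grants it, mirroring the role of the hypothesis on a† in the theorem.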

To estimate a possibly discontinuous or highly oscillating coefficient q, some authors used the output least-squares method with total variation regularization (see, e.g., [2, 11, 22, 24, 25, 125] and part 2 of 0.3.9). Namely, they treated the nonconvex optimization problem

min ∫_Ω (U(q) − z^δ)² dx + ρ ∫_Ω |∇q|  (0.43)

with ∫_Ω |∇q| being the total variation of the function q defined by (0.35).

Total variation regularization, originally introduced in image denoising by Rudin, Osher and Fatemi [111], has been used in several ill-posed inverse problems and analyzed by many authors over the last decades. This method is of particular interest for problems with a possibility of discontinuity or high oscillation in the solution (see, for example, [2, 10, 23, 25, 28, 31, 56, 79, 115] and the references therein). Although there have been many papers using total variation regularization of ill-posed problems, there are very few devoted to the convergence rates. Only recently, Burger and Osher [21] investigated the convergence rates for convex variational regularization of linear ill-posed problems in the sense of the Bregman distance. This seminal paper has been intensively developed for several linear and nonlinear ill-posed problems [66, 107, 108], etc.

We remark that since the cost function appearing in (0.43) is not convex, it is difficult to find global minimizers. To overcome this shortcoming, in Chapter 3 we do not use the output least-squares method, but apply the total variation regularization method to the energy functionals J_{z^δ}(·) and G_{z^δ}(·), and obtain convergence rates for this approach. Namely, for identifying q, we consider the convex minimization problem

min_{q ∈ Q_ad} J_{z^δ}(q) + ρ ∫_Ω |∇q|.  (0.44)

Similarly, let a† be a total variation-minimizing solution of the problem of identifying a in (0.7)–(0.8) (see § 3.2.1) and a^δ be a solution of problem (0.45). Assume that there exists a functional w* ∈ H¹(Ω)* such that

U′(a†)*w* ∈ ∂(∫_Ω |∇(·)|)(a†).


add an additional L²-stabilization to the convex energy functionals (0.44) and (0.45) for identifying q and a, respectively, and obtain convergence rates not only in the sense of the Bregman distance but also in the L²(Ω)-norm. Namely, for identifying q in (0.5)–(0.6), we consider the strictly convex minimization problem (0.48).

Denote by q^δ the solution of (0.48) and by q† the R-minimizing solution of the problem of identifying q in (0.5)–(0.6), where R(·) is defined by (0.36). Assume that there exists a functional w* ∈ H¹(Ω)* such that

U′(q†)*w* = q† + ℓ ∈ ∂R(q†)  (0.50)

for some element ℓ ∈ ∂(∫_Ω |∇(·)|)(q†). Then, we have the convergence rates

Thus, our source conditions are much weaker than those of Engl, Kunisch and Neubauer [42] and Resmerita and Scherzer [108].

Some authors have analyzed the convergence rates for the Tikhonov regularization of nonlinear ill-posed problems without the smallness conditions. For example, in [101] Neubauer considered the nonlinear ill-posed problem (0.29), where U is a nonlinear mapping between Hilbert spaces with domain D(U). He approached the output least-squares method with the Tikhonov regularization and used Hilbert scales to remove the smallness conditions. However, Neubauer could only apply his theory to the one-dimensional case of the problem of identifying the coefficient a in (0.7)–(0.8), requiring a ∈ H⁶(Ω), some restrictions on the exact solution and some unpleasant boundary conditions. Further, how to solve his nonconvex minimization problem in Hilbert scales is not clear. On the other hand, in a very interesting series of papers [17, 63, 67], Hofmann and coworkers investigated the interplay of source conditions and structural nonlinearity conditions in establishing convergence rates for the Tikhonov regularization of the nonlinear ill-posed problem (0.29), with U being a nonlinear mapping between Banach spaces. In these papers, they replaced the condition (0.33) by stronger ones

and thus arrived at the Tikhonov functional

ψ(∥U(q) − z^δ∥_F) + ρR(q)  (0.53)

with a misfit function ψ which is nonconvex. Our approach is completely different. It does not aim at minimizing the discrepancy ∥U(q) − z^δ∥_F, but an energy functional which is convex. Therefore, the theory of Hofmann et al. is in a different framework and is not comparable with our approach to the inverse problems considered in this work.

In the whole thesis we assume that Ω is an open bounded connected domain in ℝ^d, d ≥ 1, with the Lipschitz boundary ∂Ω. The functions f ∈ L²(Ω) in (0.5) or (0.7) and g ∈ L²(∂Ω) in (0.6) or (0.8) are given. The notation U refers to the nonlinear coefficient-to-solution operators for the Neumann problems. We use the standard notions of the Sobolev spaces H¹(Ω), W^{1,∞}(Ω), W^{2,∞}(Ω), etc. from the books [3, 43, 93, 117]. For the
