
Limiting Subgradients of the Marginal Function

in Some Pathological Smooth Programming Problems

Thai Doan Chuong(a)

Abstract. In this paper we show that the results of Mordukhovich, Nam and Yen [6] on differential stability in parametric programming can be used to derive upper estimates for the limiting subgradients of the marginal function in some pathological smooth programming problems proposed by Gauvin and Dubeau [2].

1. Introduction

Let $\varphi\colon X\times Y\to\overline{\mathbb{R}}$ be a function taking values in the extended real line $\overline{\mathbb{R}}:=[-\infty,\infty]$, and let $G\colon X\rightrightarrows Y$ be a set-valued mapping between Banach spaces. Consider the parametric programming problem

$$\text{minimize } \varphi(x,y) \text{ subject to } y\in G(x). \tag{1.1}$$

The extended-real-valued function

$$\mu(x) := \inf\{\varphi(x,y) \mid y\in G(x)\} \tag{1.2}$$

is said to be the marginal function (or the value function) of (1.1). The solution map $M(\cdot)$ of the problem is defined by

$$M(x) := \{y\in G(x) \mid \mu(x)=\varphi(x,y)\}. \tag{1.3}$$
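For intuition, when $X$ and $Y$ are finite dimensional, $\mu$ and $M$ can be approximated by brute force. The sketch below is ours, not part of the paper: it assumes $X=Y=\mathbb{R}$, a user-supplied objective `phi` and constraint test `in_G`, and a grid fine enough for the instance at hand; the sample data is the problem of Example 3.7 below.

```python
import numpy as np

def marginal(phi, in_G, x, ys, tol=1e-8):
    """Grid approximation of mu(x) = inf{phi(x,y) : y in G(x)} from (1.2)
    and of the solution set M(x) from (1.3)."""
    vals = np.where(in_G(x, ys), phi(x, ys), np.inf)  # +inf off the constraint set
    mu = vals.min()
    M = ys[vals - mu <= tol] if np.isfinite(mu) else np.array([])
    return mu, M

# sample data (the problem of Example 3.7 below):
# phi(x,y) = (x - y^2)^2, G(x) = {y : -(1+y)^2 <= 0} = R
phi = lambda x, y: (x - y**2) ** 2
in_G = lambda x, y: -(1 + y) ** 2 <= 0
ys = np.linspace(-2.0, 2.0, 40001)
for x in (-0.5, 0.25):
    mu, M = marginal(phi, in_G, x, ys)
    print(x, mu, M)  # expect mu(-0.5)=0.25 with M~{0}; mu(0.25)=0 with M~{-0.5, 0.5}
```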

For (1.1), we say that $\varphi$ is the objective function and $G$ is the constraint mapping. Continuity and differentiability properties of $\mu$ in the case where $X=\mathbb{R}^n$, $Y=\mathbb{R}^m$, $\varphi$ is a smooth function (i.e., a $C^1$-function) and $G(x)$ is the set of all $y$ satisfying the parametric inequality/equality system

$$g_i(x,y)\le 0,\ i=1,\dots,p;\qquad h_j(x,y)=0,\ j=1,\dots,q, \tag{1.4}$$

where $g_i\colon X\times Y\to\mathbb{R}$ $(i=1,\dots,p)$ and $h_j\colon X\times Y\to\mathbb{R}$ $(j=1,\dots,q)$ are smooth functions, were studied first by Gauvin and Tolle [3] and Gauvin and Dubeau [2]. Their results and ideas have been extended and applied by many authors; see Mordukhovich, Nam and Yen [6], where the case in which $\varphi$ is a nonsmooth function and $G$ is an arbitrary set-valued map between Banach spaces is investigated, and the references therein.

We will show that the results of [6] on differential stability in parametric programming can be used to estimate the set of limiting subgradients (i.e., the limiting subdifferential) of the marginal function in six "pathological" smooth programming problems proposed by Gauvin and Dubeau [2]. Thus, general results on differentiability properties of the marginal function of (1.1) are very useful even for the classical finite-dimensional smooth setting of the problem. We also consider several illustrative examples for the results of [6]. Unlike the corresponding examples in that paper, all the problems considered herein are smooth.

(Received 30/3/2007; revision completed 23/5/2007.)

The emphasis in [1]-[3] was made on the Clarke subgradients of $\mu$, while the main concern of [6] is the Fréchet and the limiting subgradients of $\mu$. The reader is referred to [4, 5] for interesting comments on the development of the concepts of subgradients just mentioned. Note that, under very mild assumptions on $X$ and $\varphi$, the convex hull of the limiting subdifferential of $\varphi$ at a given point $\bar{x}\in X$ coincides with the Clarke subdifferential of $\varphi$ at that point. So the limiting subdifferential can be considered as the (nonconvex) core of the corresponding Clarke subdifferential. Thus upper estimates for the limiting subdifferential of marginal functions can lead to sharp upper estimates for the Clarke subdifferential.

2. Preliminaries

Let us recall some material on generalized differentiation, which is available in [4, 5]. All the spaces considered are Banach, unless otherwise stated.

Definition 2.1. Let $\varphi\colon X\to\overline{\mathbb{R}}$ be an extended-real-valued function which is finite at $\bar{x}$. Given any $\varepsilon\ge 0$, we say that a vector $x^*$ from the topological dual space $X^*$ of $X$ is an $\varepsilon$-subgradient of $\varphi$ at $\bar{x}$ if

$$\liminf_{x\to\bar{x}} \frac{\varphi(x)-\varphi(\bar{x})-\langle x^*, x-\bar{x}\rangle}{\|x-\bar{x}\|} \ge -\varepsilon. \tag{2.1}$$

Denote by $\hat{\partial}_\varepsilon\varphi(\bar{x})$ the set of the $\varepsilon$-subgradients of $\varphi$ at $\bar{x}$. Clearly, $\hat{\partial}_0\varphi(\bar{x})\subset\hat{\partial}_\varepsilon\varphi(\bar{x})$ for every $\varepsilon\ge 0$. The set $\hat{\partial}\varphi(\bar{x}):=\hat{\partial}_0\varphi(\bar{x})$ is called the Fréchet subdifferential of $\varphi$ at $\bar{x}$.
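Two one-dimensional illustrations of Definition 2.1 (ours, not from the paper): for $\varphi(x)=|x|$ and $\varphi(x)=-|x|$ at $\bar{x}=0$, condition (2.1) with $\varepsilon=0$ reads $1-x^*\operatorname{sign}(x)\ge 0$, respectively $-1-x^*\operatorname{sign}(x)\ge 0$, for $x$ on both sides of $0$; hence

$$\hat{\partial}\,|\cdot|\,(0) = [-1,1],\qquad \hat{\partial}\,(-|\cdot|)(0) = \emptyset,$$

since $|x^*|\le 1$ satisfies the first requirement, while no $x^*$ can satisfy the second one on both sides of the origin.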

Definition 2.2. For a set-valued mapping $F\colon X\rightrightarrows X^*$ between a Banach space $X$ and its dual $X^*$, the sequential Painlevé-Kuratowski upper limit of $F(x)$ as $x\to\bar{x}$ is defined by

$$\limsup_{x\to\bar{x}} F(x) := \{x^*\in X^* \mid \exists \text{ sequences } x_k\to\bar{x} \text{ and } x_k^*\xrightarrow{w^*} x^* \text{ with } x_k^*\in F(x_k) \text{ for all } k=1,2,\dots\},$$

where $w^*$ denotes the weak* topology in $X^*$.

Definition 2.3. The limiting subdifferential (or the Mordukhovich/basic subdifferential) of $\varphi$ at $\bar{x}$ is defined by setting

$$\partial\varphi(\bar{x}) := \limsup_{\substack{x\xrightarrow{\varphi}\bar{x}\\ \varepsilon\downarrow 0}} \hat{\partial}_\varepsilon\varphi(x). \tag{2.2}$$

The singular subdifferential of $\varphi$ at $\bar{x}$ is given by

$$\partial^\infty\varphi(\bar{x}) := \limsup_{\substack{x\xrightarrow{\varphi}\bar{x}\\ \varepsilon,\lambda\downarrow 0}} \lambda\hat{\partial}_\varepsilon\varphi(x). \tag{2.3}$$
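Continuing the illustration with $\varphi(x)=-|x|$ (ours): $\hat{\partial}\varphi(x)=\{-1\}$ for $x>0$ and $\hat{\partial}\varphi(x)=\{1\}$ for $x<0$, so the limiting procedure (2.2) collects both one-sided slopes while (2.3) yields only the trivial vector:

$$\partial(-|\cdot|)(0) = \{-1,1\},\qquad \partial^\infty(-|\cdot|)(0) = \{0\},$$

the singular subdifferential being trivial because the function is Lipschitzian and the factors $\lambda\downarrow 0$ in (2.3) annihilate every bounded subgradient.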


Remark 2.4 (see [4]). If $X$ is an Asplund space (i.e., a Banach space whose separable subspaces have separable duals) and $\varphi$ is lower semicontinuous around $\bar{x}$, then we can equivalently put $\varepsilon=0$ in (2.2). Moreover, we have $\partial\varphi(\bar{x})\neq\emptyset$ for every locally Lipschitzian function.

Definition 2.5. Let $X$ be a Banach space and $f\colon X\to\mathbb{R}$ a function Lipschitzian around $\bar{x}$. The Clarke subdifferential of $f$ at $\bar{x}$ is the set

$$\partial_{CL} f(\bar{x}) := \Big\{ x^*\in X^* \ \Big|\ \langle x^*, v\rangle \le \limsup_{x'\to\bar{x},\ t\downarrow 0} \frac{f(x'+tv)-f(x')}{t}\ \ \forall v\in X \Big\}. \tag{2.4}$$

Remark 2.6 (see [4, Theorem 3.57]). For the Clarke subdifferential in Asplund spaces, we have

$$\partial_{CL} f(\bar{x}) = \mathrm{cl}^*\,\mathrm{co}\,[\partial f(\bar{x}) + \partial^\infty f(\bar{x})], \tag{2.5}$$

where "co" denotes the convex hull and "cl*" stands for the closure in the weak* topology of $X^*$.

Remark 2.7 (see [4]). If $\varphi\colon\mathbb{R}^n\to\mathbb{R}$ is strictly differentiable at $\bar{x}$, then

$$\partial_{CL}\varphi(\bar{x}) = \partial\varphi(\bar{x}) = \{\nabla\varphi(\bar{x})\}. \tag{2.6}$$

The domain and the graph of the map $F\colon X\rightrightarrows Y$ are defined, respectively, by setting

$$\mathrm{dom}\,F := \{x\in X \mid F(x)\neq\emptyset\},\qquad \mathrm{gph}\,F := \{(x,y)\in X\times Y \mid y\in F(x)\}.$$

3. Subgradients of the value function in smooth programming problems

Consider (1.1) in the special case where the objective function $\varphi$ is smooth and the constraint set is given by

$$G(x) := \{y\in Y \mid \varphi_i(x,y)\le 0,\ i=1,\dots,m;\ \ \varphi_i(x,y)=0,\ i=m+1,\dots,m+r\}, \tag{3.1}$$

with $\varphi_i\colon X\times Y\to\mathbb{R}$ $(i=1,\dots,m+r)$ being given smooth functions. Such problems are called smooth programming problems.

Definition 3.1. The classical Lagrangian is defined by setting

$$L(x,y,\lambda) = \varphi(x,y) + \lambda_1\varphi_1(x,y) + \cdots + \lambda_{m+r}\varphi_{m+r}(x,y), \tag{3.2}$$

where the scalars $\lambda_1,\dots,\lambda_{m+r}$ (and also the vector $\lambda:=(\lambda_1,\dots,\lambda_{m+r})\in\mathbb{R}^{m+r}$) are called the Lagrange multipliers.


Given a point $(\bar{x},\bar{y})\in\mathrm{gph}\,M$ in the graph of the solution map $M(\cdot)$, we consider the set of Lagrange multipliers

$$\Lambda(\bar{x},\bar{y}) := \Big\{\lambda\in\mathbb{R}^{m+r} \ \Big|\ L_y(\bar{x},\bar{y},\lambda) := \nabla_y\varphi(\bar{x},\bar{y}) + \sum_{i=1}^{m+r}\lambda_i\nabla_y\varphi_i(\bar{x},\bar{y}) = 0,\ \ \lambda_i\ge 0,\ \lambda_i\varphi_i(\bar{x},\bar{y})=0 \text{ for } i=1,\dots,m\Big\}. \tag{3.3}$$
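Numerically, one element of $\Lambda(\bar{x},\bar{y})$ can be found by nonnegative least squares when all constraints are inequalities ($r=0$). The helper below is a sketch of ours (the name `one_multiplier` and its interface are not from the paper): it solves the stationarity system of (3.3) over the active constraints with `scipy.optimize.nnls`, which returns a single multiplier vector even when $\Lambda(\bar{x},\bar{y})$ is a whole polytope, as in Example 3.6 below; sign-free equality multipliers would need a different solver.

```python
import numpy as np
from scipy.optimize import nnls

def one_multiplier(grad_y_phi, grads_y, phi_vals, tol=1e-10):
    """Return one element of Lambda(xbar, ybar) from (3.3) for an
    inequality-only problem (r = 0), or None if stationarity is unsolvable.
    grads_y[i] is grad_y phi_i(xbar, ybar); phi_vals[i] is phi_i(xbar, ybar)."""
    active = [i for i, v in enumerate(phi_vals) if abs(v) <= tol]  # complementary slackness
    A = np.column_stack([grads_y[i] for i in active])  # columns: active gradients
    lam_active, residual = nnls(A, -np.asarray(grad_y_phi, dtype=float))
    if residual > 1e-8:        # no lambda >= 0 solves the stationarity system
        return None
    lam = np.zeros(len(grads_y))
    lam[active] = lam_active
    return lam

# data of Example 3.6 below at (xbar, ybar) = (0, (0, 0)):
# grad_y phi = (0, -1); grad_y phi_1 = grad_y phi_2 = (0, 1); both constraints active
lam = one_multiplier([0.0, -1.0],
                     [np.array([0.0, 1.0]), np.array([0.0, 1.0])],
                     [0.0, 0.0])
print(lam)  # one point of the segment Lambda = {(t, 1-t) : 0 <= t <= 1}
```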

Definition 3.2. We say that the Mangasarian-Fromovitz constraint qualification holds at $(\bar{x},\bar{y})$ if

$$\begin{cases}\text{the gradients } \nabla\varphi_{m+1}(\bar{x},\bar{y}),\dots,\nabla\varphi_{m+r}(\bar{x},\bar{y}) \text{ are linearly independent;}\\[2pt] \text{there is } w\in X\times Y \text{ such that } \langle\nabla\varphi_i(\bar{x},\bar{y}),w\rangle=0 \text{ for } i=m+1,\dots,m+r\\[2pt] \text{and } \langle\nabla\varphi_i(\bar{x},\bar{y}),w\rangle<0 \text{ whenever } i=1,\dots,m \text{ with } \varphi_i(\bar{x},\bar{y})=0.\end{cases} \tag{3.4}$$

Definition 3.3. We say that the solution map $M\colon\mathrm{dom}\,G\rightrightarrows Y$ admits a local upper Lipschitzian selection at $(\bar{x},\bar{y})$ if there exists a single-valued mapping $h\colon\mathrm{dom}\,G\to Y$ which satisfies $h(\bar{x})=\bar{y}$ and for which there are constants $\ell>0$ and $\delta>0$ such that $h(x)\in M(x)$ and $\|h(x)-h(\bar{x})\|\le\ell\|x-\bar{x}\|$ for all $x\in\mathrm{dom}\,G\cap B_\delta(\bar{x})$. Here

$$B_\delta(\bar{x}) := \{x\in X \mid \|x-\bar{x}\|<\delta\}.$$

The next statement follows from [6, Theorem 4.1].

Theorem 3.4 (Fréchet subgradients of value functions in smooth nonlinear programs in Asplund spaces). Let $\mu(\cdot)$ be defined by (1.2). Take $\bar{x}\in\mathrm{dom}\,M$ and $\bar{y}\in M(\bar{x})$, and assume that the gradients

$$\nabla\varphi_1(\bar{x},\bar{y}),\dots,\nabla\varphi_{m+r}(\bar{x},\bar{y}) \tag{3.5}$$

are linearly independent. Then we have the inclusion

$$\hat{\partial}\mu(\bar{x}) \subset \bigcup_{\lambda\in\Lambda(\bar{x},\bar{y})}\Big\{\nabla_x\varphi(\bar{x},\bar{y})+\sum_{i=1}^{m+r}\lambda_i\nabla_x\varphi_i(\bar{x},\bar{y})\Big\}. \tag{3.6}$$

Furthermore, (3.6) reduces to the equality

$$\hat{\partial}\mu(\bar{x}) = \bigcup_{\lambda\in\Lambda(\bar{x},\bar{y})}\Big\{\nabla_x\varphi(\bar{x},\bar{y})+\sum_{i=1}^{m+r}\lambda_i\nabla_x\varphi_i(\bar{x},\bar{y})\Big\} \tag{3.7}$$

if the solution map $M\colon\mathrm{dom}\,G\rightrightarrows Y$ admits a local upper Lipschitzian selection at $(\bar{x},\bar{y})$.

From [6, Corollary 4.3] we obtain the following result.

Corollary 3.5. Under the assumptions imposed in the first part of Theorem 3.4, suppose that the spaces $X$ and $Y$ are Asplund and that the qualification condition (3.5) is replaced by (3.4). Then inclusion (3.6) holds, and it reduces to the equality (3.7) provided that the solution map $M\colon\mathrm{dom}\,G\rightrightarrows Y$ admits a local upper Lipschitzian selection at $(\bar{x},\bar{y})$.

Let us consider some examples of smooth programming problems illustrating the results obtained in Theorem 3.4 and Corollary 3.5 and the assumptions made therein. We start with examples showing that the upper Lipschitzian assumption of Theorem 3.4 is essential but not necessary to ensure the equality in the Fréchet subgradient inclusion (3.6). For convenience, denote by "RHS" and "LHS" the expressions standing on the right-hand side and the left-hand side of inclusion (3.6), respectively.

Example 3.6 (cf. [2, Example 3.4]). Let $X=\mathbb{R}$, $Y=\mathbb{R}^2$ and $\bar{x}=0$, $\bar{y}=(0,0)$. Consider the marginal function $\mu(\cdot)$ in (1.2) with $\varphi(x,y)=-y_2$, $y=(y_1,y_2)\in G(x)$, where

$$G(x) := \{y=(y_1,y_2)\in\mathbb{R}^2 \mid \varphi_1(x,y)=y_2-y_1^2\le 0,\ \ \varphi_2(x,y)=y_2+y_1^2-x\le 0\}.$$

Then we have

$$\mu(x) = \begin{cases} -x & \text{if } x\le 0\\ -\tfrac{x}{2} & \text{otherwise;}\end{cases}\qquad M(x) = \Big\{y=(y_1,y_2)\in G(x) \ \Big|\ y_2 = \begin{cases} x & \text{if } x\le 0\\ \tfrac{x}{2} & \text{otherwise}\end{cases}\Big\},$$

$$\Lambda(\bar{x},\bar{y}) = \{(t,1-t) \mid 0\le t\le 1\}.$$

Furthermore,

$$\nabla\varphi_1(\bar{x},\bar{y}) = (0,0,1),\qquad \nabla\varphi_2(\bar{x},\bar{y}) = (-1,0,1)$$

are linearly independent. Hence RHS $=[-1,0]$. On the other hand, a direct computation based on (2.1) gives LHS $=[-1,-\tfrac12]$, i.e., inclusion (3.6) is strict here. Observe that the solution map $M(\cdot)$ above does not admit any upper Lipschitzian selection at $(\bar{x},\bar{y})$. This example shows that the latter assumption is essential for the validity of the equality in (3.6) asserted by Theorem 3.4.
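The direct computation of LHS mentioned above runs as follows (we spell out the step): for $x^*\in\hat{\partial}\mu(0)$, condition (2.1) with $\varepsilon=0$ requires $\mu(x)-\mu(0)-x^*x\ge o(|x|)$ near $0$, i.e.,

$$x>0:\ \ -\tfrac{x}{2}-x^*x\ge o(x)\ \Longrightarrow\ x^*\le-\tfrac12;\qquad x<0:\ \ -x-x^*x=-x(1+x^*)\ge o(|x|)\ \Longrightarrow\ x^*\ge-1;$$

conversely, every $x^*\in[-1,-\tfrac12]$ makes the quotient in (2.1) nonnegative, whence LHS $=\hat{\partial}\mu(0)=[-1,-\tfrac12]$.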

Example 3.7. Let $X=Y=\mathbb{R}$ and $\bar{x}=\bar{y}=0$. Consider the marginal function $\mu(\cdot)$ in (1.2) with

$$\varphi(x,y) = (x-y^2)^2,\qquad G(x) = \{y\in\mathbb{R} \mid \varphi_1(x,y) = -(1+y)^2\le 0\}.$$

One can easily deduce from (1.2) and (1.3) that

$$\mu(x) = \begin{cases} x^2 & \text{if } x\le 0\\ 0 & \text{otherwise;}\end{cases}\qquad M(x) = \begin{cases} \{0\} & \text{if } x\le 0\\ \{-\sqrt{x},\sqrt{x}\} & \text{otherwise;}\end{cases}\qquad \Lambda(\bar{x},\bar{y}) = \{0\}.$$

Furthermore, $\nabla\varphi_1(\bar{x},\bar{y}) = (0,-2)\neq(0,0)$. Hence RHS $=\{0\}$. Besides, LHS $=\hat{\partial}\mu(0)=\{0\}$. Thus (3.6) holds as equality although the solution map $M(\cdot)$ does not admit any upper Lipschitzian selection at $(\bar{x},\bar{y})$. We have seen that the upper Lipschitzian assumption is sufficient but not necessary for the equality assertion of Theorem 3.4.
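The deduction of $\mu$ takes one line, recorded here for completeness: substituting $t=y^2\in[0,\infty)$,

$$\mu(x) = \inf_{y\in\mathbb{R}}(x-y^2)^2 = \inf_{t\ge 0}(x-t)^2 = \begin{cases} x^2 & \text{if } x\le 0\ (\text{attained at } t=0)\\ 0 & \text{otherwise}\ (t=x),\end{cases}$$

with minimizers $y=0$ for $x\le 0$ and $y=\pm\sqrt{x}$ for $x>0$.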


The next example shows that (3.4) is weaker than (3.5).

Example 3.8. Let $X=\mathbb{R}^2$, $Y=\mathbb{R}^2$ and $\bar{x}=(0,0)$, $\bar{y}=(0,0)$. Consider the marginal function $\mu(\cdot)$ in (1.2) with $\varphi(x,y)=-y_2$, $y=(y_1,y_2)\in G(x)$,

$$G(x) := \{y=(y_1,y_2)\in\mathbb{R}^2 \mid \varphi_1(x,y)=y_2+y_1^4x_1+g(y_1)\le 0,\ \varphi_2(x,y)=y_2-y_1^4-x_2\le 0,\ \varphi_3(x,y)=y_1^2-5\le 0,\ \varphi_4(x,y)=-y_2-1\le 0\},$$

where

$$g(y_1) = \begin{cases} 0 & \text{if } y_1=0\\ y_1^4\sin^4\frac{2\pi}{y_1} & \text{otherwise.}\end{cases}$$

Then $\nabla\varphi_1(\bar{x},\bar{y})=(0,0,0,1)$, $\nabla\varphi_2(\bar{x},\bar{y})=(0,-1,0,1)$, $\nabla\varphi_3(\bar{x},\bar{y})=(0,0,0,0)$ and $\nabla\varphi_4(\bar{x},\bar{y})=(0,0,0,-1)$ are not linearly independent, i.e., the qualification condition of Theorem 3.4 is violated, but (3.4) is satisfied at $(\bar{x},\bar{y})$, so the results of Corollary 3.5 are applicable to this problem. It is easy to find that

$$\Lambda(\bar{x},\bar{y}) = \{(t,1-t,0,0) \mid 0\le t\le 1\}.$$

Thus we have an upper estimate for the Fréchet subdifferential of the value function:

$$\hat{\partial}\mu(\bar{x}) \subset \{0\}\times[-1,0].$$
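For the record, here is the multiplier computation behind the last two displays (our expansion of "it is easy to find"): at $(\bar{x},\bar{y})$ the constraints $\varphi_3,\varphi_4$ are inactive, so complementary slackness forces $\lambda_3=\lambda_4=0$, and the stationarity condition of (3.3) becomes

$$(0,-1)+\lambda_1(0,1)+\lambda_2(0,1) = (0,0),\qquad\text{i.e.,}\quad \lambda_1+\lambda_2=1,\ \lambda_1,\lambda_2\ge 0.$$

Substituting $\lambda=(t,1-t,0,0)$ into the right-hand side of (3.6), with $\nabla_x\varphi(\bar{x},\bar{y})=(0,0)$, $\nabla_x\varphi_1(\bar{x},\bar{y})=(0,0)$ and $\nabla_x\varphi_2(\bar{x},\bar{y})=(0,-1)$, gives the points $(0,-(1-t))$, $0\le t\le 1$, i.e., the estimate $\{0\}\times[-1,0]$.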

The next theorem follows from [6, Corollary 5.4].

Theorem 3.9 (Limiting subgradients of value functions in smooth nonlinear programs in Asplund spaces). Let $M(\cdot)$ be the solution map from (1.3) with the constraint mapping $G(\cdot)$ defined by (3.1), where both spaces $X$ and $Y$ are Asplund. Suppose that the Mangasarian-Fromovitz constraint qualification (3.4) is satisfied. Then one has the inclusions

$$\partial\mu(\bar{x}) \subset \bigcup_{\lambda\in\Lambda(\bar{x},\bar{y})}\Big\{\nabla_x\varphi(\bar{x},\bar{y})+\sum_{i=1}^{m+r}\lambda_i\nabla_x\varphi_i(\bar{x},\bar{y})\Big\}, \tag{3.8}$$

$$\partial^\infty\mu(\bar{x}) \subset \bigcup_{\lambda\in\Lambda^\infty(\bar{x},\bar{y})}\Big\{\sum_{i=1}^{m+r}\lambda_i\nabla_x\varphi_i(\bar{x},\bar{y})\Big\}, \tag{3.9}$$

where the set of multipliers $\Lambda(\bar{x},\bar{y})$ is defined in (3.3) and where

$$\Lambda^\infty(\bar{x},\bar{y}) := \Big\{\lambda\in\mathbb{R}^{m+r} \ \Big|\ \sum_{i=1}^{m+r}\lambda_i\nabla_y\varphi_i(\bar{x},\bar{y})=0,\ \lambda_i\ge 0,\ \lambda_i\varphi_i(\bar{x},\bar{y})=0 \text{ for } i=1,\dots,m\Big\}.$$

Moreover, (3.8) holds as equality if $M(\cdot)$ admits a local upper Lipschitzian selection at $(\bar{x},\bar{y})$.


Similarly to Theorem 3.4, the upper Lipschitzian assumption of Theorem 3.9 is sufficient but not necessary to ensure the equality in the inclusion (3.8).

Observe that $\partial^\infty\mu(\bar{x})=\{0\}$ due to (3.9) if the functions $\varphi_i$ satisfy the (partial) Mangasarian-Fromovitz constraint qualification with respect to $y$, i.e., when the full gradients of the $\varphi_i$ in (3.4) are replaced by $\nabla_y\varphi_i(\bar{x},\bar{y})$.
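The reason is a standard argument, which we include for completeness: if $w_y\in Y$ is a direction certifying the partial qualification condition, then every $\lambda\in\Lambda^\infty(\bar{x},\bar{y})$ satisfies

$$0 = \Big\langle\sum_{i=1}^{m+r}\lambda_i\nabla_y\varphi_i(\bar{x},\bar{y}),\,w_y\Big\rangle = \sum_{i\le m:\ \varphi_i(\bar{x},\bar{y})=0}\lambda_i\langle\nabla_y\varphi_i(\bar{x},\bar{y}),w_y\rangle,$$

where every pairing in the last sum is negative; hence $\lambda_i=0$ for all active inequalities, and the linear independence of $\nabla_y\varphi_i(\bar{x},\bar{y})$, $i=m+1,\dots,m+r$, forces the remaining multipliers to vanish. Thus $\Lambda^\infty(\bar{x},\bar{y})=\{0\}$ and (3.9) yields the claim.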

By the representation (2.5), the results obtained in Theorem 3.9 immediately imply an upper estimate for the Clarke subdifferential of the value function in smooth programming, which extends the well-known results of Gauvin and Dubeau [1, Theorem 5.3] established in finite dimensions.

4. Application to Gauvin-Dubeau's examples

Let us apply the results of Theorems 3.4 and 3.9 and Corollary 3.5 to compute or estimate the Fréchet, the limiting, and the Clarke subdifferentials of the value function in the "pathological" examples from Gauvin and Dubeau [2].

Example 4.1 (see [2, Example 2.1]). Let $X=Y=\mathbb{R}$. Consider the problem

$$\text{minimize } \varphi(x,y)=-y \text{ subject to } y\in G(x),\qquad G(x) = \{y\in\mathbb{R} \mid \varphi_1(x,y)=g(y)-x\le 0\},$$

where

$$g(y) = \begin{cases} -(y+\tfrac12)^2+\tfrac54 & \text{if } y\le 0\\ e^{-y} & \text{otherwise.}\end{cases}$$

We have $\mu(x)=\inf\{\varphi(x,y)=-y \mid y\in G(x)\}$. One can find that

$$G(x) = \begin{cases} (-\infty,-\tfrac12-\sqrt{\tfrac54-x}\,]\cup[-\tfrac12+\sqrt{\tfrac54-x},+\infty) & \text{if } 1\le x<\tfrac54\\ (-\infty,-\tfrac12-\sqrt{\tfrac54-x}\,]\cup[-\ln x,+\infty) & \text{if } 0<x<1\\ (-\infty,-\tfrac12-\sqrt{\tfrac54-x}\,] & \text{if } x\le 0\\ \mathbb{R} & \text{if } x\ge\tfrac54;\end{cases}$$

$$\mu(x) = \begin{cases} -\infty & \text{if } x>0\\ \tfrac12+\sqrt{\tfrac54-x} & \text{otherwise;}\end{cases}\qquad M(x) = \begin{cases} \{y\in\mathbb{R} \mid y=-\tfrac12-\sqrt{\tfrac54-x}\} & \text{if } x\le 0\\ \emptyset & \text{otherwise.}\end{cases}$$

Let $\bar{x}=0$ and $\bar{y}=-\tfrac12-\tfrac{\sqrt5}{2}$. Note that $\nabla\varphi_1(\bar{x},\bar{y})=(-1,\sqrt5)\neq(0,0)$ and the solution map $M(\cdot)$ does not admit any upper Lipschitzian selection at $(0,-\tfrac12-\tfrac{\sqrt5}{2})$. From (3.3) it follows that $\Lambda(\bar{x},\bar{y})=\{\tfrac{1}{\sqrt5}\}$. Hence, applying the results of Theorem 3.4 we obtain $\hat{\partial}\mu(\bar{x})\subset\{-\tfrac{1}{\sqrt5}\}$.

Similarly, using Theorem 3.9 instead of Theorem 3.4, we get $\partial\mu(\bar{x})\subset\{-\tfrac{1}{\sqrt5}\}$.


Remark 1. A direct computation based on (2.1) and (2.2) gives

$$\hat{\partial}\mu(\bar{x}) = \partial\mu(\bar{x}) = \emptyset.$$
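The emptiness in Remark 1 can be seen directly from the formula for $\mu$ obtained above (a computation we add for clarity): $\mu(0)=\tfrac12+\tfrac{\sqrt5}{2}$ is finite while $\mu(x)=-\infty$ for all $x>0$, so for every $x^*\in\mathbb{R}$

$$\liminf_{x\to 0}\frac{\mu(x)-\mu(0)-x^*x}{|x|} = -\infty,$$

and (2.1) fails with $\varepsilon=0$; the same behaviour along sequences $x_k\downarrow 0$ excludes limiting subgradients via (2.2). Hence both inclusions obtained from Theorems 3.4 and 3.9 are strict here.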

Example 4.2 (see [2, Example 2.2]). Let $X=\mathbb{R}$, $Y=\mathbb{R}^2$. Consider the problem

$$\text{minimize } \varphi(x,y)=-y_2 \text{ subject to } y=(y_1,y_2)\in G(x),$$

$$G(x) = \{y=(y_1,y_2)\in\mathbb{R}^2 \mid \varphi_1(x,y)=(y_1^2+(y_2-1)^2-\tfrac14)(y_1^2+(y_2+1)^2-1)\le 0,\ \varphi_2(x,y)=y_1-x=0\},\qquad x\ge 0.$$

We have

$$\mu(x) = \begin{cases} -1-\sqrt{\tfrac14-x^2} & \text{if } 0\le x\le\tfrac12\\ 1-\sqrt{1-x^2} & \text{if } \tfrac12<x\le 1\\ +\infty & \text{if } x>1;\end{cases}\qquad M(x) = \Big\{y=(y_1,y_2)\in G(x) \ \Big|\ y_2 = \begin{cases} 1+\sqrt{\tfrac14-x^2} & \text{if } 0\le x\le\tfrac12\\ -1+\sqrt{1-x^2} & \text{if } \tfrac12<x\le 1\end{cases}\Big\}.$$

For $\bar{x}:=\tfrac12$, $\bar{y}:=(\tfrac12,1)\in M(\bar{x})$, we see that

$$\nabla\varphi_1(\bar{x},\bar{y}) = (0,\tfrac{13}{4},0),\qquad \nabla\varphi_2(\bar{x},\bar{y}) = (-1,1,0)$$

are linearly independent, and $\Lambda(\tfrac12,(\tfrac12,1))=\emptyset$, since both constraint gradients have zero $y_2$-component there and thus no multipliers can cancel the $y_2$-component $-1$ of $\nabla_y\varphi(\bar{x},\bar{y})$. Hence, by Theorem 3.4 we obtain

$$\hat{\partial}\mu(\bar{x}) = \emptyset.$$

Similarly, using Theorem 3.9 instead of Theorem 3.4, we get

$$\partial\mu(\bar{x}) = \emptyset.$$

Taking into account the fact that

$$\partial_{CL}\mu(\bar{x}) = \mathrm{co}[\partial\mu(\bar{x})+\partial^\infty\mu(\bar{x})],$$

we obtain

$$\partial_{CL}\mu(\bar{x}) = \emptyset.$$

For $\bar{x}:=1$, $\bar{y}:=(1,-1)\in M(\bar{x})$, we obtain

$$\hat{\partial}\mu(\bar{x}) = \partial\mu(\bar{x}) = \partial_{CL}\mu(\bar{x}) = \emptyset.$$
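As a sanity check (ours, not from [2]), the formula for $\mu$ can be confirmed numerically: the equality constraint pins $y_1=x$, so for each $x\ge 0$ it suffices to scan $y_2$.

```python
import numpy as np

def mu_num(x, y2s=np.linspace(-3.0, 3.0, 600001)):
    """Numeric mu(x) for Example 4.2: with y1 fixed to x by the equality
    constraint, minimize -y2 over the feasible y2 only."""
    feas = (x**2 + (y2s - 1)**2 - 0.25) * (x**2 + (y2s + 1)**2 - 1.0) <= 0.0
    return np.inf if not feas.any() else (-y2s[feas]).min()

for x in (0.3, 0.7, 1.2):
    print(x, mu_num(x))
# expect: mu(0.3) = -1 - sqrt(0.25 - 0.09) = -1.4,
#         mu(0.7) = 1 - sqrt(1 - 0.49) ~ 0.2859, mu(1.2) = inf (G(1.2) empty)
```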

Example 4.3 (see [2, Example 3.1]). Taking $X=Y=\mathbb{R}$, we consider the problem

$$\text{minimize } \varphi(x,y)=-y \text{ subject to } y\in G(x),\qquad G(x) = \{y\in\mathbb{R} \mid \varphi_1(x,y)=y^3-x\le 0\}.$$

We have

$$G(x) = (-\infty,\sqrt[3]{x}\,],\qquad \mu(x) = -\sqrt[3]{x},\qquad M(x) = \{\sqrt[3]{x}\}.$$

For $\bar{x}:=0$, $\bar{y}:=0$, we see that $\nabla\varphi_1(\bar{x},\bar{y})=(-1,0)$ and $\Lambda(0,0)=\emptyset$ (the stationarity condition $-1+\lambda\cdot 3\bar{y}^2=0$ of (3.3) has no solution since $\bar{y}=0$). Hence, applying Theorem 3.4 we obtain

$$\hat{\partial}\mu(\bar{x}) = \emptyset.$$

Similarly, using Theorem 3.9 instead of Theorem 3.4, we get

$$\partial\mu(\bar{x}) = \emptyset.$$

Taking into account the fact that $\partial_{CL}\mu(\bar{x})=\mathrm{co}[\partial\mu(\bar{x})+\partial^\infty\mu(\bar{x})]$, we obtain

$$\partial_{CL}\mu(\bar{x}) = \emptyset.$$
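As a consistency check (ours), note that $\mu(x)=-\sqrt[3]{x}$ gives, for every $x^*\in\mathbb{R}$ and $x>0$,

$$\frac{\mu(x)-\mu(0)-x^*x}{|x|} = -x^{-2/3}-x^* \longrightarrow -\infty\quad (x\downarrow 0),$$

so $\hat{\partial}\mu(0)=\partial\mu(0)=\emptyset$ indeed, in agreement with the conclusions drawn from $\Lambda(0,0)=\emptyset$.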

Example 4.4 (see [2, Example 3.2]). Taking $X=\mathbb{R}$, $Y=\mathbb{R}^2$, we consider the problem

$$\text{minimize } \varphi(x,y)=-y_2 \text{ subject to } y=(y_1,y_2)\in G(x),$$

$$G(x) = \{y=(y_1,y_2)\in\mathbb{R}^2 \mid \varphi_1(x,y)=y_1^2-100\le 0,\ \varphi_2(x,y)=g(y_1)-y_2=0,\ \varphi_3(x,y)=(y_1^8-x)y_2=0\},$$

where

$$g(y_1) = \begin{cases} 0 & \text{if } y_1=0\\ y_1^4\cos\frac{2\pi}{y_1} & \text{otherwise.}\end{cases}$$

One can find that

$$G(x) = \begin{cases} \{(y_1,0)\in\mathbb{R}^2 \mid y_1=0 \text{ or } y_1=\tfrac{4}{2k+1},\ k=0,\pm1,\pm2,\dots\} & \text{if } x\le 0\\ G(0)\cup\{(\pm\sqrt[8]{x},g(\sqrt[8]{x})) \mid \sqrt[4]{x}\le 100\} & \text{otherwise;}\end{cases}$$

$$\mu(x) = \begin{cases} 0 & \text{if } x\le 0\\ \min\{0,-g(\sqrt[8]{x})\} & \text{otherwise;}\end{cases}\qquad M(x) = \begin{cases} \{(y_1,0)\in\mathbb{R}^2 \mid y_1=0 \text{ or } y_1=\tfrac{4}{2k+1},\ k=0,\pm1,\pm2,\dots\} & \text{if } x\le 0\\ \{(y_1,y_2)\in G(x) \mid y_2=\max\{0,g(\sqrt[8]{x})\}\} & \text{otherwise.}\end{cases}$$

For $\bar{x}:=0$ consider $\bar{y}:=(\tfrac{4}{2k+1},0)$, $k=0,\pm1,\pm2,\dots$, first with $k=2n$, $n=0,\pm1,\pm2,\dots$ Note that

$$\nabla\varphi_1(\bar{x},\bar{y}) = (0,\tfrac{8}{4n+1},0),\qquad \nabla\varphi_2(\bar{x},\bar{y}) = (0,\tfrac{32\pi}{(4n+1)^2},-1),\qquad \nabla\varphi_3(\bar{x},\bar{y}) = (0,0,(\tfrac{4}{4n+1})^8);$$

the gradients of the equality constraints $\varphi_2,\varphi_3$ are linearly independent and the qualification condition (3.4) holds at $(\bar{x},\bar{y})$ (the three full gradients themselves are linearly dependent, their $x$-components all being zero, so the independence condition (3.5) of Theorem 3.4 fails), while the solution map $M(\cdot)$ does not admit any upper Lipschitzian selection at $(0,(\tfrac{4}{4n+1},0))$. From (3.3) it follows that

$$\Lambda(0,(\tfrac{4}{4n+1},0)) = \{(0,0,(n+\tfrac14)^8)\}.$$

Hence, applying the results of Corollary 3.5 we obtain

$$\hat{\partial}\mu(\bar{x}) \subset \{0\}.$$

Similarly, with $k=2n+1$, $n=0,\pm1,\pm2,\dots$, we obtain $\hat{\partial}\mu(\bar{x})\subset\{0\}$.

Using (3.8) and (3.9) in Theorem 3.9 and computing similarly as above, we get

$$\partial\mu(\bar{x}) \subset \{0\},\qquad \partial^\infty\mu(\bar{x}) \subset \{0\}.$$

Taking into account the fact that $\partial_{CL}\mu(\bar{x})=\mathrm{co}[\partial\mu(\bar{x})+\partial^\infty\mu(\bar{x})]$, we obtain

$$\partial_{CL}\mu(\bar{x}) \subset \{0\}.$$

Remark 2. Applying (2.1) and (2.2) again, we see that $0$ belongs neither to $\hat{\partial}\mu(\bar{x})$ nor to $\partial\mu(\bar{x})$, i.e., $\hat{\partial}\mu(\bar{x})=\partial\mu(\bar{x})=\emptyset$. This implies $\partial_{CL}\mu(\bar{x})=\emptyset$. Therefore, (3.6) and (3.8) hold as strict inclusions. The inclusion (3.9) holds as equality (i.e., $\partial^\infty\mu(\bar{x})=\{0\}$) because the $\varphi_i$, $i=1,2,3$, satisfy the (partial) Mangasarian-Fromovitz constraint qualification with respect to $y$.
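The verification behind Remark 2 (a computation we add for clarity): along $x_n=n^{-8}\downarrow 0$ one has $g(\sqrt[8]{x_n})=n^{-4}\cos(2\pi n)=n^{-4}=\sqrt{x_n}$, hence $\mu(x_n)=-\sqrt{x_n}$, and for every $x^*\in\mathbb{R}$

$$\frac{\mu(x_n)-\mu(\bar{x})-x^*x_n}{x_n} = -x_n^{-1/2}-x^* \longrightarrow -\infty,$$

so no $x^*$ satisfies (2.1) at $\bar{x}=0$, and the same sequence excludes limiting subgradients via (2.2).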

Example 4.5 (see [2, Example 3.3]). Let $X=\mathbb{R}^2$, $Y=\mathbb{R}^2$. Consider the problem

$$\text{minimize } \varphi(x,y)=-y_2 \text{ subject to } y=(y_1,y_2)\in G(x),$$

where

$$G(x) = \{y=(y_1,y_2)\in\mathbb{R}^2 \mid \varphi_1(x,y)=y_2+y_1^4x_1+g(y_1)\le 0,\ \varphi_2(x,y)=y_2-y_1^4-x_2\le 0,\ \varphi_3(x,y)=y_1^2-5\le 0,\ \varphi_4(x,y)=-y_2-1\le 0\},$$

$$g(y_1) = \begin{cases} 0 & \text{if } y_1=0\\ y_1^4\sin^4\frac{2\pi}{y_1} & \text{otherwise}\end{cases}$$

(this is the constraint system of Example 3.8). For $\bar{x}:=(0,0)$, the optimal solution set is

$$M(\bar{x}) = \{(0,0)\}\cup\{(\tfrac{2}{k},0) \mid k=\pm1,\pm2,\dots\}.$$

For $\bar{y}:=(\tfrac{2}{k},0)$, $k=\pm1,\pm2,\dots$, we see that

$$\nabla\varphi_1(\bar{x},\bar{y}) = (\tfrac{16}{k^4},0,0,1),\quad \nabla\varphi_2(\bar{x},\bar{y}) = (0,-1,-\tfrac{32}{k^3},1),\quad \nabla\varphi_3(\bar{x},\bar{y}) = (0,0,\tfrac{4}{k},0),\quad \nabla\varphi_4(\bar{x},\bar{y}) = (0,0,0,-1)$$

are linearly independent. Besides, $\Lambda(0,0,\tfrac{2}{k},0)=\{(1,0,0,0)\}$. It follows from Theorem 3.4 that

$$\hat{\partial}\mu(\bar{x}) \subset \{(\tfrac{16}{k^4},0)\}.$$

Using (3.8) and (3.9) and performing a similar computation as above, we get

$$\partial\mu(\bar{x}) \subset \{(\tfrac{16}{k^4},0)\},\qquad \partial^\infty\mu(\bar{x}) \subset \{(0,0)\}.$$

Since $\partial_{CL}\mu(\bar{x})=\mathrm{co}[\partial\mu(\bar{x})+\partial^\infty\mu(\bar{x})]$, we have

$$\partial_{CL}\mu(\bar{x}) \subset \{(\tfrac{16}{k^4},0)\}.$$

For $\bar{y}:=(0,0)$, we see that

$$\nabla\varphi_1(\bar{x},\bar{y}) = (0,0,0,1),\quad \nabla\varphi_2(\bar{x},\bar{y}) = (0,-1,0,1),\quad \nabla\varphi_3(\bar{x},\bar{y}) = (0,0,0,0),\quad \nabla\varphi_4(\bar{x},\bar{y}) = (0,0,0,-1),$$

which is precisely the situation already analyzed in Example 3.8: (3.4) holds while (3.5) fails, and Corollary 3.5 gives $\hat{\partial}\mu(\bar{x})\subset\{0\}\times[-1,0]$.
