
DOI 10.1007/s10444-009-9140-9

Optimal adaptive sampling recovery

Dinh Dũng

Received: 29 April 2009 / Accepted: 25 August 2009 / Published online: 16 September 2009

© Springer Science + Business Media, LLC 2009

Abstract We propose an approach to study optimal methods of adaptive sampling recovery of functions by sets of a finite capacity which is measured by their cardinality or pseudo-dimension. Let $W \subset L_q$, $0 < q \le \infty$, be a class of functions on $\mathbb{I}^d := [0,1]^d$. For $B$ a subset in $L_q$, we define a sampling recovery method with the free choice of sample points and recovering functions from $B$ as follows. For each $f \in W$ we choose $n$ sample points. This choice defines $n$ sampled values. Based on these sampled values, we choose a function from $B$ for recovering $f$. The choice of $n$ sample points and a recovering function from $B$ for each $f \in W$ defines a sampling recovery method $S_n^B$ by functions in $B$. An efficient sampling recovery method should be adaptive to $f$. Given a family $\mathcal{B}$ of subsets in $L_q$, we consider optimal methods of adaptive sampling recovery of functions in $W$ by $B$ from $\mathcal{B}$ in terms of the quantity $R_n(W, \mathcal{B})_q$. We denote $R_n(W, \mathcal{B})_q$ by $e_n(W)_q$ if $\mathcal{B}$ is the family of all subsets $B$ in $L_q$ whose cardinality does not exceed $2^n$, and by $r_n(W)_q$ if $\mathcal{B}$ is the family of all subsets $B$ in $L_q$ of pseudo-dimension at most $n$. Let $0 < p, q, \theta \le \infty$ and $\alpha$ satisfy one of the following conditions: (i) $\alpha > d/p$; (ii) $\alpha = d/p$, $\theta \le \min(1, q)$, $p, q < \infty$. Then for the $d$-variable Besov class $U^\alpha_{p,\theta}$ we establish the asymptotic order of $e_n(U^\alpha_{p,\theta})_q$ and $r_n(U^\alpha_{p,\theta})_q$.


To construct asymptotically optimal adaptive sampling recovery methods for $e_n(U^\alpha_{p,\theta})_q$ and $r_n(U^\alpha_{p,\theta})_q$ we use a quasi-interpolant wavelet representation of functions in Besov spaces associated with some equivalent discrete quasi-norm.

Keywords Adaptive sampling recovery · Quasi-interpolant wavelet representation · B-spline · Besov space

Mathematics Subject Classifications (2000) 41A46 · 41A05 · 41A25 · 42C40

D. Dũng
Information Technology Institute, Vietnam National University, Hanoi,
144 Xuan Thuy, Cau Giay, Hanoi, Vietnam
e-mail: dinhdung@vnu.edu.vn

1 Introduction

We are interested in problems of sampling recovery of functions defined on the unit $d$-cube $\mathbb{I}^d := [0, 1]^d$. Let $L_q := L_q(\mathbb{I}^d)$, $0 < q \le \infty$, denote the quasi-normed space of functions on $\mathbb{I}^d$ with the usual $q$th integral quasi-norm $\|\cdot\|_q$ for $0 < q < \infty$, and the normed space $C(\mathbb{I}^d)$ of continuous functions on $\mathbb{I}^d$ with the max-norm $\|\cdot\|_\infty$ for $q = \infty$. We consider sampling recoveries of functions from a class of a certain smoothness $W \subset L_q$ by functions from a subset $B$ in $L_q$. The recovery error will be measured in the norm $\|\cdot\|_q$. We will focus our attention on optimal methods of adaptive sampling recovery of functions in $W$ by subsets $B$ of a finite capacity which is measured by their cardinality or pseudo-dimension. Let us first recall some well-known sampling recovery methods.

Suppose that $f$ is a function in $W$ and $\xi = \{x_k\}_{k=1}^n$ are $n$ points in $\mathbb{I}^d$. We want to approximately recover $f$ from the sampled values $f(x_1), f(x_2), \ldots, f(x_n)$. A general sampling recovery method can be defined as
$$R_n(H, \xi, f) \;:=\; H\bigl(f(x_1), \ldots, f(x_n)\bigr), \qquad (1.1)$$
where $H$ is a mapping from $\mathbb{R}^n$ into $L_q$. In particular, a linear sampling recovery method is of the form
$$L_n(f) \;:=\; \sum_{k=1}^{n} f(x_k)\, \varphi_k, \qquad (1.2)$$
where $\{\varphi_k\}_{k=1}^n$ are given $n$ functions on $\mathbb{I}^d$.

To study optimal sampling methods of recovery for $f \in W$ from $n$ of their values, we can use the quantity
$$g_n(W)_q \;:=\; \inf_{H,\,\xi}\ \sup_{f \in W}\ \|f - R_n(H, \xi, f)\|_q,$$
where the infimum is taken over all sequences $\xi = \{x_k\}_{k=1}^n$ and all mappings $H$ from $\mathbb{R}^n$ into $L_q$.

The function class $W$ will be taken as the $d$-variable Besov class $U^\alpha_{p,\theta}$, defined as the unit ball of the Besov space $B^\alpha_{p,\theta}$, having a fractional smoothness $\alpha > 0$ (the definition of this space is given in Section 2). Notice that in problems of sampling recovery, other classes such as the well-known Sobolev and Lizorkin–Triebel classes, etc., can be considered (see [22]).

We use the notations: $x_+ := \max(0, x)$ for $x \in \mathbb{R}$; $A_n(f) \ll B_n(f)$ if $A_n(f) \le C B_n(f)$ with $C$ an absolute constant not depending on $n$ and/or $f \in W$; and $A_n(f) \asymp B_n(f)$ if $A_n(f) \ll B_n(f)$ and $B_n(f) \ll A_n(f)$.

The following result is known (see [10, 19, 21, 22, 26] and references therein). Let $0 < p, q \le \infty$, $0 < \theta \le \infty$ and $\alpha > d/p$. Then there is a linear sampling recovery method $L_n$ of the form (1.2) such that

$$\| f - L_n(f)\|_q \;\ll\; n^{-\alpha/d + (1/p - 1/q)_+}. \qquad (1.3)$$

This result says that the linear sampling recovery method $L_n$ is asymptotically optimal in the sense that no sampling recovery method $R_n$ of the form (1.1) gives a better rate of convergence than $L_n$.
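For a concrete feel for (1.2) and the rate (1.3), here is a minimal sketch (ours, not taken from the paper) of a linear sampling recovery method with equidistant sample points on $[0,1]$ and piecewise-linear hat functions as the $\varphi_k$:

```python
import numpy as np

def linear_sampling_recovery(f, n):
    """Linear method of form (1.2): L_n(f) = sum_k f(x_k) * phi_k, with
    equidistant nodes x_k in [0, 1] and hat functions phi_k.
    This particular choice of nodes and phi_k is only illustrative."""
    nodes = np.linspace(0.0, 1.0, n)       # sample points x_1, ..., x_n
    samples = f(nodes)                     # sampled values f(x_k)

    def recovered(x):
        # np.interp evaluates sum_k f(x_k) * phi_k(x) for hat functions phi_k
        return np.interp(x, nodes, samples)

    return recovered

if __name__ == "__main__":
    f = lambda x: np.sin(2.0 * np.pi * x)
    x = np.linspace(0.0, 1.0, 10_000)
    for n in (8, 16, 32, 64):
        Lf = linear_sampling_recovery(f, n)
        print(f"n = {n:3d}, sup-norm error = {np.max(np.abs(f(x) - Lf(x))):.2e}")
```

For this smooth univariate target the printed errors decay roughly like $n^{-2}$, consistent with (1.3) for $d = 1$, $\alpha = 2$ and $p = q = \infty$.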

Sampling recovery methods of the form (1.1), which may be linear or non-linear, are non-adaptive, i.e., the points $\xi = \{x_k\}_{k=1}^n$ at which the values $f(x_1), \ldots, f(x_n)$ are sampled, and the recovery method $R_n$, are the same for all functions $f \in W$. Let us introduce a setting of adaptive sampling recovery which in some cases gives a better asymptotic order of the recovery error than non-adaptive sampling recovery.

Let $B$ be a subset in $L_q$. We will define a sampling recovery method with the free choice of sample points and recovering functions from $B$. Roughly speaking, for each $f \in W$ we choose a set of $n$ sample points. This choice defines a collection of $n$ sampled values. Based on the information of these sampled values, we choose a function from $B$ for recovering $f$. The choice of $n$ sample points and a recovering function from $B$ for each $f \in W$ defines a sampling recovery method $S_n^B$ by functions in $B$. Let us give a precise notion of $S_n^B$.

Denote by $\mathcal{I}_n$ the set of subsets $\xi$ in $\mathbb{I}^d$ of cardinality at most $n$. Let $V_n$ be the set whose elements are collections of real numbers $a_\xi = \{a(x)\}_{x \in \xi}$, $\xi \in \mathcal{I}_n$, $a(x) \in \mathbb{R}$ (for $a_\xi, b_\eta \in V_n$, we write by definition $a_\xi = b_\eta$ if and only if $\xi = \eta$ and $a(x) = b(x)$ for every $x \in \xi$). Let $I_n$ be a mapping from $W$ into $\mathcal{I}_n$ and $P$ a mapping from $V_n$ into $B$. Then the pair $(I_n, P)$ generates the mapping $S_n^B$ from $W$ into $B$ by the formula
$$S_n^B(f) \;:=\; P\bigl(\{f(x)\}_{x \in I_n(f)}\bigr).$$
For each fixed choice of sample points, $P$ can be treated as a mapping $H$ from $\mathbb{R}^n$ into $L_q$.
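To make the pair $(I_n, P)$ concrete, the following sketch (our own toy construction, not the one used in the paper) chooses the sample points adaptively for each $f$ and then lets $P$ pick, from a finite set $B$, the function that best fits the sampled data; the names `I_n`, `P` and `S_nB` only mirror the notation above.

```python
import numpy as np

def I_n(f, n, n_coarse=8):
    """Adaptive choice of at most n sample points in [0, 1]: start from a
    coarse grid and add points where |f| varies most (an ad hoc rule)."""
    coarse = np.linspace(0.0, 1.0, n_coarse)
    variation = np.abs(np.diff(f(coarse)))
    budget = np.rint(variation / max(variation.sum(), 1e-15)
                     * (n - n_coarse)).astype(int)
    pts = [coarse]
    for a, b, k in zip(coarse[:-1], coarse[1:], budget):
        if k > 0:
            pts.append(np.linspace(a, b, k + 2)[1:-1])
    return np.sort(np.concatenate(pts))[:n]

def P(xi, samples, B):
    """Pick from the finite set B the candidate closest to the data (xi, samples)."""
    errors = [np.max(np.abs(g(xi) - samples)) for g in B]
    return B[int(np.argmin(errors))]

def S_nB(f, n, B):
    """Adaptive sampling recovery method S_n^B generated by (I_n, P)."""
    xi = I_n(f, n)
    return P(xi, f(xi), B)

if __name__ == "__main__":
    # toy finite set B: step functions with quantized jump location and height
    B = [(lambda x, t=t, h=h: np.where(x < t, 0.0, h))
         for t in np.linspace(0.1, 0.9, 17) for h in np.linspace(-1.0, 1.0, 9)]
    f = lambda x: np.where(x < 0.37, 0.0, 0.75)      # target with one jump
    g = S_nB(f, n=32, B=B)
    x = np.linspace(0.0, 1.0, 5000)
    print("L_2 recovery error:", np.sqrt(np.mean((f(x) - g(x)) ** 2)))
```

Because the sample points depend on $f$, this toy method is adaptive in the sense described above; a non-adaptive method would fix $\xi$ once and for all.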

We want to choose a sampling recovery method $S_n^B$ so that the error $\|f - S_n^B(f)\|_q$ of this recovery is as small as possible. Clearly, such an efficient choice should be adaptive to $f$. The error of an optimal adaptive sampling recovery method for each $f \in W$ is measured by
$$R_n^B(f)_q \;:=\; \inf_{S_n^B}\, \|f - S_n^B(f)\|_q. \qquad (1.4)$$

Given a family $\mathcal{B}$ of subsets in $L_q$, we consider optimal sampling recoveries by $B$ from $\mathcal{B}$ in terms of the quantity
$$R_n(W, \mathcal{B})_q \;:=\; \inf_{B \in \mathcal{B}}\ \sup_{f \in W}\, R_n^B(f)_q. \qquad (1.5)$$

We assume a restriction on the sets $B \in \mathcal{B}$, requiring that they should have, in some sense, a finite capacity. In the present paper, the capacity of $B$ is measured by its cardinality or pseudo-dimension. This reasonable restriction provides nontrivial lower bounds of the asymptotic order of $R_n(W, \mathcal{B})_q$ for well-known function classes $W$. Denote $R_n(W, \mathcal{B})_q$ by $e_n(W)_q$ if $\mathcal{B}$ in (1.5) is the family of all subsets $B$ in $L_q$ such that $|B| \le 2^n$, where $|B|$ denotes the cardinality of $B$, and by $r_n(W)_q$ if $\mathcal{B}$ in (1.5) is the family of all subsets $B$ in $L_q$ of pseudo-dimension at most $n$.

The quantity $e_n(W)_q$ is related to the entropy $n$-width (entropy number) $\varepsilon_n(W)_q$, which is the functional inverse of the classical $\varepsilon$-entropy introduced by Kolmogorov and Tikhomirov [18]. The quantity $r_n(W)_q$ is related to the non-linear $n$-width $\rho_n(W)_q$ introduced recently by Ratsaby and Maiorov [24]. (See the definitions of $\varepsilon_n(W)_q$ and $\rho_n(W)_q$ in the Appendix.)

The pseudo-dimension of a set $B$ of real-valued functions on a common domain is defined as follows. For a real number $t$, let $\operatorname{sgn}(t)$ be $1$ for $t > 0$ and $-1$ otherwise. For $x \in \mathbb{R}^n$, let $\operatorname{sgn}(x) := (\operatorname{sgn}(x_1), \operatorname{sgn}(x_2), \ldots, \operatorname{sgn}(x_n))$. The pseudo-dimension of $B$ is defined as the largest integer $n$ such that there exist points $a^1, a^2, \ldots, a^n$ in the domain and an offset $b = (b_1, \ldots, b_n) \in \mathbb{R}^n$ such that the cardinality of the set
$$\bigl\{\operatorname{sgn}\bigl(f(a^1) + b_1, \ldots, f(a^n) + b_n\bigr) : f \in B\bigr\}$$
is $2^n$. The pseudo-dimension plays an important role in the theory of pattern recognition and regression estimation, empirical processes and computational learning theory. Thus, in the probably approximately correct (PAC) learning model, if $B$ is a set of real-valued functions of finite pseudo-dimension, then a function from $B$ can be learnt, with respect to an unknown probability distribution on the domain, to accuracy $\varepsilon$ and probability $1 - \delta$ by just knowing its values at $m$ randomly chosen sample points (see [24,25]). If $B$ is an $n$-dimensional linear manifold of real-valued functions, then its pseudo-dimension is equal to $n$ (see [16]).
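As a brute-force illustration of this definition (ours, not from the paper), the sketch below checks whether a finite family $B$ realizes all $2^n$ sign patterns $\operatorname{sgn}(f(a^1)+b_1,\ldots,f(a^n)+b_n)$ for one given choice of points and offsets:

```python
import numpy as np

def sign_patterns(funcs, points, offsets):
    """All patterns sgn(f(a_1)+b_1, ..., f(a_n)+b_n) over f in funcs,
    with sgn(t) = 1 for t > 0 and -1 otherwise, as in the text."""
    patterns = set()
    for f in funcs:
        values = np.array([f(a) for a in points]) + offsets
        patterns.add(tuple(1 if v > 0 else -1 for v in values))
    return patterns

def shatters(funcs, points, offsets):
    """True if funcs realizes all 2^n sign patterns for these points/offsets."""
    return len(sign_patterns(funcs, points, offsets)) == 2 ** len(points)

if __name__ == "__main__":
    # B: linear functions x -> c1*x + c0 over a grid of coefficients,
    # i.e. (a grid inside) a 2-dimensional linear manifold of functions.
    coeffs = np.linspace(-3.0, 3.0, 25)
    B = [(lambda x, c1=c1, c0=c0: c1 * x + c0) for c1 in coeffs for c0 in coeffs]
    print("2 points shattered:", shatters(B, [0.0, 1.0], np.zeros(2)))
    # A single choice of 3 points/offsets is not shattered; this one check
    # does not prove pseudo-dimension 2, but it is consistent with the
    # statement that an n-dimensional linear manifold has pseudo-dimension n.
    print("3 points shattered:", shatters(B, [0.0, 0.5, 1.0], np.zeros(3)))
```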

We say that $p, q, \theta, \alpha$ satisfy Condition (1.6) if $0 < p, q, \theta \le \infty$, $\alpha < \infty$, and one of the following restrictions holds:
(i) $\alpha > d/p$;
(ii) $\alpha = d/p$, $\theta \le \min(1, q)$, $p, q < \infty$.  (1.6)

The main results of the present paper read as follows.

Theorem 1.1 Let $p, q, \theta, \alpha$ satisfy Condition (1.6). Then for the $d$-variable Besov class $U^\alpha_{p,\theta}$, there is the following asymptotic order

We also consider optimal adaptive sampling recovery of functions in $W$ by the sets $\Sigma_n(\Phi)$ of $n$-term linear combinations of elements from a given family $\Phi$, in terms of the quantity
$$s_n(W, \Phi)_q \;:=\; R_n^{\Sigma_n(\Phi)}(W)_q \;:=\; \sup_{f \in W} R_n^{\Sigma_n(\Phi)}(f)_q.$$
The quantity $s_n(W, \Phi)_q$ has been introduced in [14] in another equivalent form (with the notation $\nu_n(W, \Phi)_q$). Let us recall it. For each function $f \in W$,

we choose a sequence $\xi = \{x_s\}_{s=1}^n$ of $n$ points in $\mathbb{I}^d$, a sequence $a = \{a_s\}_{s=1}^n$ of $n$ functions on $\mathbb{R}^n$ and a sequence $\Phi_n = \{\varphi_{k_s}\}_{s=1}^n$ of $n$ functions from $\Phi$. This choice defines a sampling recovery method given by
$$S(f) \;:=\; \sum_{s=1}^{n} a_s\bigl(f(x_1), \ldots, f(x_n)\bigr)\, \varphi_{k_s}.$$

The optimal adaptive sampling recovery in terms of the quantity $s_n(W, \Phi)_q$ is related to the quantity $\sigma_n(W, \Phi)_q$ of non-linear $n$-term approximation, which characterizes the approximation of $W$ by functions from $\Sigma_n(\Phi)$ (see the definition in the Appendix). The reader can find in [7, 27] surveys on various aspects of this approximation and its applications. Let us recall some results in [15] on adaptive sampling recovery in regard to the quantity $s_n(W, \Phi)_q$.

For a given natural number $r$, let $M$ be the centered B-spline of even order $2r$ with support $[-r, r]$ and knots at the integer points $-r, \ldots, 0, \ldots, r$, and define the B-spline wavelets
$$M_{k,s}(x) \;:=\; M(2^k x - s),$$
for a non-negative integer $k$ and $s \in \mathbb{Z}$. Then $\mathcal{M}$ is the set of all $M_{k,s}$ which do not vanish identically on $\mathbb{I}$.
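A small numerical sketch (ours) of these objects: the centered B-spline $M$ of order $2r$ built by the Cox–de Boor recursion, the wavelets $M_{k,s}(x) = M(2^k x - s)$, and, at each level $k$, the indices $s$ with $-r < s < 2^k + r$ for which $M_{k,s}$ does not vanish identically on $\mathbb{I} = [0, 1]$:

```python
import numpy as np

def cardinal_bspline(x, m):
    """Cardinal B-spline N_m with knots 0, 1, ..., m (Cox-de Boor recursion)."""
    x = np.asarray(x, dtype=float)
    if m == 1:
        return ((x >= 0) & (x < 1)).astype(float)
    return (x * cardinal_bspline(x, m - 1)
            + (m - x) * cardinal_bspline(x - 1, m - 1)) / (m - 1)

def M(x, r=2):
    """Centered B-spline of even order 2r with support [-r, r] and integer knots."""
    return cardinal_bspline(np.asarray(x, dtype=float) + r, 2 * r)

def M_ks(x, k, s, r=2):
    """B-spline wavelet M_{k,s}(x) = M(2^k x - s)."""
    return M(2.0 ** k * np.asarray(x, dtype=float) - s, r)

if __name__ == "__main__":
    r = 2
    for k in range(5):
        J_k = range(-r + 1, 2 ** k + r)          # s with -r < s < 2^k + r
        print(f"level k = {k}: {len(J_k)} functions M_k,s alive on [0, 1]")
```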

The following result was proven in [15]. Let $1 \le p, q \le \infty$, $0 < \theta \le \infty$, and $1 < \alpha < \min(2r, 2r - 1 + 1/p)$. Then for the univariate Besov class $U^\alpha_{p,\theta}$, there is the following asymptotic order

To construct an asymptotically optimal adaptive sampling recovery method for $s_n(U^\alpha_{p,\theta}, \mathcal{M})_q$ which gives the upper bound in (1.7), we used the following quasi-interpolant wavelet representation of functions in the Besov space $B^\alpha_{p,\theta}$ in terms of the B-spline wavelet system $\mathcal{M}$, associated with some equivalent discrete quasi-norm. If $1 \le p \le \infty$, $0 < \theta \le \infty$, and $1 < \alpha < \min(2r, 2r - 1 + 1/p)$, then a function $f$ in the Besov space $B^\alpha_{p,\theta}$ can be represented as a series

$$f \;=\; \sum_{k=0}^{\infty}\, \sum_{s \in J(k)} c_{k,s}(f)\, M_{k,s},$$
with the convergence in $B^\alpha_{p,\theta}$, where $J(k)$ is the set of $s$ for which $M_{k,s}$ do not vanish identically on $\mathbb{I}$, and $c_{k,s}(f)$ are functions of a finite number of values of $f$, this number depending neither on $k$, $s$ nor on $f$. Moreover, the quasi-norm of $B^\alpha_{p,\theta}$ is equivalent to the discrete quasi-norm

In the present paper, we also extend (1.7) to the case $0 < p, q \le \infty$ and $\alpha \ge d/p$, and generalize it to multivariate functions on the $d$-cube $\mathbb{I}^d$. In particular, important is the case $0 < p < 1$ or $0 < q < 1$, which is of great interest in non-linear approximations (see [7, 9]). To get $d$-variable B-spline wavelets, we take $M$ to be the tensor product of univariate centered B-splines and define
$$M_{k,s}(x) \;:=\; M(2^k x - s),$$
for a non-negative integer $k$ and $s \in \mathbb{Z}^d$. Denote again by $\mathcal{M}$ the set of all $M_{k,s}$ which do not vanish identically on $\mathbb{I}^d$. We prove the following theorem.

Theorem 1.2 Let $p, q, \theta, \alpha$ satisfy Condition (1.6) and $\alpha < \min(2r, 2r - 1 + 1/p)$. Then for the $d$-variable Besov class $U^\alpha_{p,\theta}$, there is the following asymptotic upper bound for $e_n(U^\alpha_{p,\theta})_q$

Notice that the quantities $e_n(W)_q$ and $r_n(W)_q$ are absolute in the sense of optimal sampling recovery methods, while the quantity $s_n(W, \Phi)_q$ depends on a system $\Phi$. However, Theorems 1.1 and 1.2 show that $e_n(U^\alpha_{p,\theta})_q$, $r_n(U^\alpha_{p,\theta})_q$ and $s_n(U^\alpha_{p,\theta}, \mathcal{M})_q$ have the same asymptotic order. Previous results on sampling recovery considered only the case $\alpha > d/p$. In Theorems 1.1 and 1.2, we obtain some results also for the case $\alpha = d/p$, $\theta \le \min(1, q)$, $0 < p, q < \infty$ of the Besov class $U^\alpha_{p,\theta}$.

In the present paper, we consider optimal adaptive sampling recoveries for the Besov class of multivariate functions. Results similar to Theorems 1.1 and 1.2 are also true for the Sobolev and Lizorkin–Triebel classes of multivariate functions.

The paper is organized as follows. In Section 2, we give a definition of quasi-interpolant for functions on $\mathbb{I}^d$, construct a quasi-interpolant wavelet representation in terms of the B-spline dictionary $\mathcal{M}$ for Besov spaces and prove some quasi-norm equivalences based on this representation, in particular, a discrete quasi-norm in terms of the coefficient functionals. In Sections 3 and 4, we prove Theorem 1.1. In Section 3, we prove the asymptotic order of $r_n(U^\alpha_{p,\theta})_q$ in Theorem 1.1 and of $s_n(U^\alpha_{p,\theta}, \mathcal{M})_q$ in Theorem 1.2, and construct asymptotically optimal adaptive sampling recovery methods which give the upper bounds for $r_n(U^\alpha_{p,\theta})_q$ and $s_n(U^\alpha_{p,\theta}, \mathcal{M})_q$. In Section 4, we prove the asymptotic order of $e_n(U^\alpha_{p,\theta})_q$ in Theorem 1.1 and construct asymptotically optimal adaptive sampling recovery methods which give the upper bound for $e_n(U^\alpha_{p,\theta})_q$. In the Appendix in Section 5, we give some auxiliary notions and results on non-linear approximations which are employed, in particular, in establishing the lower bounds in Theorems 1.1 and 1.2.

2 Quasi-interpolant wavelet representations in Besov spaces

Let $\Lambda = \{\lambda(j)\}_{j \in P^d(\mu)}$ be a finite even sequence, i.e., $\lambda(-j) = \lambda(j)$, where $P^d(\mu) := \{ j \in \mathbb{Z}^d : |j_i| \le \mu,\ i = 1, 2, \ldots, d\}$. We define the linear operator $Q$ for functions $f$ on $\mathbb{R}^d$ by
$$Q(f, x) \;:=\; \sum_{s \in \mathbb{Z}^d} \Lambda(f, s)\, M(x - s), \qquad \Lambda(f, s) \;:=\; \sum_{j \in P^d(\mu)} \lambda(j)\, f(s - j). \qquad (2.1)$$
Moreover, $Q$ is local in the following sense. There is a positive number $\delta > 0$ such that for any $f \in C(\mathbb{R}^d)$ and $x \in \mathbb{R}^d$, $Q(f, x)$ depends only on the values $f(y)$ at a finite number of points $y$ with $|y_i - x_i| \le \delta$, $i = 1, 2, \ldots, d$. We will require $Q$ to reproduce the space $P^d_{2r-1}$ of polynomials of order at most $2r - 1$ in each variable $x_i$, that is,
$$Q(p) \;=\; p, \qquad p \in P^d_{2r-1}. \qquad (2.2)$$

There are various constructions of such quasi-interpolants in the literature. De Boor and Fix [5] introduced another quasi-interpolant based on the values of derivatives. The reader can also consult the books [3, 6] for surveys on quasi-interpolants.
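For a feel for (2.1)–(2.2), here is a univariate sketch that continues the code above (it reuses `cardinal_bspline` and `M`); the mask $\lambda(0) = 4/3$, $\lambda(\pm 1) = -1/6$ with $r = 2$, $\mu = 1$ is a classical choice that reproduces cubic polynomials, and is our illustrative assumption, not a mask taken from the paper:

```python
import numpy as np
# reuses cardinal_bspline and M from the B-spline sketch above

LAM = {0: 4.0 / 3.0, 1: -1.0 / 6.0, -1: -1.0 / 6.0}   # even mask, mu = 1

def Q(f, x, r=2, s_range=40):
    """Quasi-interpolant Q(f, x) = sum_s Lambda(f, s) M(x - s) with
    Lambda(f, s) = sum_{|j| <= mu} lambda(j) f(s - j)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for s in range(-s_range, s_range + 1):
        lam_fs = sum(lam * f(s - j) for j, lam in LAM.items())
        out += lam_fs * M(x - s, r)
    return out

if __name__ == "__main__":
    x = np.linspace(-5.0, 5.0, 1001)
    for f in (lambda t: t ** 2, lambda t: t ** 3):
        print("max reproduction error:", np.max(np.abs(Q(f, x) - f(x))))
    # both errors are at rounding level, i.e. this Q reproduces cubics,
    # matching the reproduction requirement (2.2) for 2r - 1 = 3
```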

Given a $d$-cube in $\mathbb{R}^d$, we denote by $L_p$, $0 < p \le \infty$, the quasi-normed space of functions on it, equipped with the usual $p$th integral quasi-norm for $0 < p < \infty$ and with the max-norm $\|\cdot\|_\infty$ for $p = \infty$. If $\tau$ is a number such that $0 < \tau \le \min(p, 1)$, then for any sequence of functions $\{f_k\}$ there is the inequality
$$\Bigl\|\sum_k f_k\Bigr\|_p^{\tau} \;\le\; \sum_k \|f_k\|_p^{\tau}.$$
For a positive integer $l$ and $h \in \mathbb{R}^d$, the $l$th difference operator $\Delta_h^l$ is defined by
$$\Delta_h^l f(x) \;:=\; \sum_{j=0}^{l} (-1)^{l-j} \binom{l}{j}\, f(x + jh),$$
and the associated $l$th modulus of smoothness of $f \in L_p$ is $\omega_l(f, t)_p := \sup_{|h| \le t} \|\Delta_h^l f\|_p$, the quasi-norm being taken over the subset on which $\Delta_h^l f$ is defined.

For $0 < p, \theta \le \infty$ and $0 < \alpha < l$, the Besov space $B^\alpha_{p,\theta}$ is the set of functions $f \in L_p$ for which the Besov quasi-semi-norm $|f|_{B^\alpha_{p,\theta}}$ is finite. The Besov semi-norm $|f|_{B^\alpha_{p,\theta}}$ is given by
$$|f|_{B^\alpha_{p,\theta}} \;:=\; \begin{cases} \Bigl(\displaystyle\int_0^\infty \bigl(t^{-\alpha}\,\omega_l(f, t)_p\bigr)^{\theta}\, \frac{dt}{t}\Bigr)^{1/\theta}, & \theta < \infty,\\[1.5ex] \displaystyle\sup_{t > 0}\; t^{-\alpha}\,\omega_l(f, t)_p, & \theta = \infty. \end{cases}$$
We will assume that continuous functions to be recovered are from the Besov space $B^\alpha_{p,\theta}$ with the restriction on the smoothness $\alpha \ge d/p$, which is a condition for the embedding of this space into $C(\mathbb{I}^d)$.

If $\{f_k\}_{k=0}^{\infty}$ is a sequence whose component functions $f_k$ are in $L_p$, for $0 < p, \theta \le \infty$ and $\beta \ge 0$ we use the $b^\beta_\theta(L_p)$ “quasi-norms”
$$\bigl\|\{f_k\}\bigr\|_{b^\beta_\theta(L_p)} \;:=\; \Bigl(\sum_{k=0}^{\infty} \bigl(2^{\beta k}\,\|f_k\|_p\bigr)^{\theta}\Bigr)^{1/\theta}$$
(with the usual modification by a supremum for $\theta = \infty$); for brevity this quantity is also denoted by $\|\{f_k\}\|_{b^\beta_\theta}$. We will also need a discrete Hardy inequality for sequences $\{a_k\}_{k=0}^{\infty}$, with a constant $C = C(\beta, \theta)$ (see, e.g., [8]).

For the Besov space $B^\alpha_{p,\theta}$, there is the following quasi-norm equivalence:
$$\|f\|_{B^\alpha_{p,\theta}} \;\asymp\; \bigl\|\{\omega_l(f, 2^{-k})_p\}_{k \ge 0}\bigr\|_{b^\alpha_\theta} \;+\; \|f\|_p.$$
If $Q$ is a quasi-interpolant of the form (2.1)–(2.2), for $h > 0$ and a function $f$ on $\mathbb{R}^d$, we define the operator $Q(\cdot\,; h)$ by
$$Q(f, x;\, h) \;:=\; \sum_{s \in \mathbb{Z}^d} \Lambda(f, s;\, h)\, M(h^{-1}x - s), \qquad \Lambda(f, s;\, h) \;:=\; \sum_{j \in P^d(\mu)} \lambda(j)\, f\bigl(h(s - j)\bigr).$$
The operator $Q(\cdot\,; h)$ has the same properties as $Q$: it is a local bounded linear operator in $\mathbb{R}^d$ and reproduces the polynomials from $P^d_{2r-1}$.

Note that the construction of $Q(f, \cdot\,; h)$ uses values of $f$ outside $\mathbb{I}^d$, so it cannot be applied directly to a function known only through its sampled values at points in $\mathbb{I}^d$. An approach to construct a quasi-interpolant for a function on $\mathbb{I}^d$ is to extend it by interpolation Lagrange polynomials. This approach has been proposed in [15] for the univariate case. Let us recall it.

For a non-negative integer $m$, we put $x_j = j 2^{-m}$, $j \in \mathbb{Z}$. If $f$ is a function on $\mathbb{I}$, we form the Lagrange polynomials which interpolate $f$ at the points $x_j$ nearest to the left and to the right endpoints of the interval $\mathbb{I}$, respectively. The function $\bar f$ is defined as an extension of $f$ to $\mathbb{R}$ by means of these interpolation polynomials outside $\mathbb{I}$. Obviously, if $f$ is continuous on $\mathbb{I}$, then $\bar f$ is a continuous function on $\mathbb{R}$. Let $Q$ be a quasi-interpolant of the form (2.1)–(2.2) in $C(\mathbb{R})$. We introduce the operator $Q_m$ by putting
$$Q_m(f, x) \;=\; Q(\bar f, x;\, 2^{-m}), \qquad x \in \mathbb{I},$$
for a function $f$ on $\mathbb{I}$. We have

$$Q_m(f, x) \;=\; \sum_{s \in J(m)} a_{m,s}(f)\, M_{m,s}(x), \qquad \forall x \in \mathbb{I}, \qquad (2.8)$$
where $J(m) := \{s \in \mathbb{Z} : -r < s < 2^m + r\}$ is the set of $s$ for which $M_{m,s}$ do not vanish identically on $\mathbb{I}$, and
$$a_{m,s}(f) \;:=\; \Lambda(\bar f, s;\, 2^{-m}) \;=\; \sum_{|j| \le \mu} \lambda(j)\, \bar f\bigl(2^{-m}(s - j)\bigr). \qquad (2.9)$$

The operator $Q_m$ is called a quasi-interpolant for $C(\mathbb{I})$.
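Continuing the same toy code (it reuses `M` and the mask `LAM` above), the sketch below evaluates the coefficients $a_{m,s}(f)$ of (2.9) and the operator $Q_m$ of (2.8) on $\mathbb{I} = [0,1]$; since the exact formula for the extension $\bar f$ by Lagrange interpolation polynomials is not reproduced above, a simple cubic extrapolation near the endpoints is substituted as a stand-in:

```python
import numpy as np
# reuses M and LAM from the previous sketches; here r = 2, mu = 1

R = 2

def f_bar(f, t, m):
    """Stand-in for the extension of f from [0, 1] to R: inside [0, 1] use f
    itself, outside extrapolate by a cubic fitted to the four dyadic points
    nearest to the endpoint (an assumption, not the paper's formula)."""
    t = np.asarray(t, dtype=float)
    vals = f(np.clip(t, 0.0, 1.0))
    grid = np.linspace(0.0, 1.0, 2 ** m + 1)
    left = np.polyfit(grid[:4], f(grid[:4]), 3)
    right = np.polyfit(grid[-4:], f(grid[-4:]), 3)
    vals = np.where(t < 0.0, np.polyval(left, t), vals)
    return np.where(t > 1.0, np.polyval(right, t), vals)

def a_ms(f, m, s):
    """Coefficient (2.9): a_{m,s}(f) = sum_{|j| <= mu} lambda(j) fbar(2^{-m}(s - j))."""
    return sum(lam * f_bar(f, 2.0 ** (-m) * (s - j), m) for j, lam in LAM.items())

def Q_m(f, m, x):
    """Quasi-interpolant (2.8): sum over J(m) = {s : -r < s < 2^m + r}
    of a_{m,s}(f) M_{m,s}(x), with M_{m,s}(x) = M(2^m x - s)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for s in range(-R + 1, 2 ** m + R):
        out += a_ms(f, m, s) * M(2.0 ** m * x - s, R)
    return out

if __name__ == "__main__":
    f = lambda t: np.sin(np.pi * t)
    x = np.linspace(0.0, 1.0, 501)
    for m in (3, 4, 5, 6):
        print(f"m = {m}: sup error = {np.max(np.abs(Q_m(f, m, x) - f(x))):.2e}")
```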

We now give a multivariate generalization of the univariate quasi-interpolant $Q_m$. For this purpose we rewrite the coefficient functionals $a_{m,s}(f)$ in the definition of $Q_m$ in a more suitable form. Let $b$ be a function of the discrete variable $k \in Z(m)$, where $Z(m) := \{s \in \mathbb{Z} : 0 \le s \le 2^m\}$. For a non-negative integer $l$, put $Z(m, l) := \{s \in \mathbb{Z} : -l < s < 2^m + l\}$. We extend $b$ to the function $\operatorname{Ext}(b)$ on $Z(m, r + \mu)$ by a formula built from the values $b(k + j)$. A function $f$ on $\mathbb{I}$ defines a function $b_f$ on $Z(m)$ by $b_f(k) := f(2^{-m}k)$. From (2.6), (2.7) and (2.10) it is easy to see that
$$\operatorname{Ext}(b_f, k) \;=\; \bar f(2^{-m}k),$$
and consequently, we can rewrite the coefficient functionals $a_{m,s}(f)$ given in (2.9) in terms of $\operatorname{Ext}(b_f)$. Moreover, the number of terms in $Q_m(f)$ is of the size $\approx 2^{dm}$.

Similarly to the quasi-interpolants $Q$ and $Q(\cdot\,; h)$, the operator $Q_m$ is a local bounded linear mapping in $C(\mathbb{I}^d)$ and reproduces $P^d_{2r-1}$. Denote by $D(m)$ the set of dyadic cubes $I$ of size $2^{-m}$ which are contained in $\mathbb{I}^d$, and by $J(m)$ the set of $s$ for which $M_{m,s}$ do not vanish identically on $\mathbb{I}^d$. The estimates that follow hold with a constant $C$ depending on $r$, $d$, $p$ only. If $0 < p \le \infty$, then for all non-negative integers $m$ and all functions

If $0 < p, \theta \le \infty$ and $0 < \alpha < \min(2r, 2r - 1 + 1/p)$, then a function $f$ on $\mathbb{I}^d$ belongs to the Besov space $B^\alpha_{p,\theta}$ if and only if $f$ can be represented by the series
$$f \;=\; \sum_{k=0}^{\infty} q_k(f).$$

Moreover, the Besov quasi-norm $B(f)$ is equivalent to each of the quasi-norms $B_i(f)$, $i = 2, 3, 4$, where
$$B_3(f) \;:=\; \bigl\|\{f - P_k(f)\}\bigr\|_{b^\alpha_\theta(L_p)} + \|f\|_p, \qquad B_4(f) \;:=\; \bigl\|\{E_k(f)_p\}\bigr\|_{b^\alpha_\theta} + \|f\|_p.$$

Let us recall some well-known embeddings of the spaces $B^\alpha_{p,\theta}$. For $0 < p, q, \theta \le \infty$, $\alpha > 0$ and $\alpha > \delta := d(1/p - 1/q)_+$, there is the inequality
$$\|f\|_{B^{\alpha-\delta}_{q,\theta}} \;\le\; C\, \|f\|_{B^\alpha_{p,\theta}}, \qquad (2.22)$$
and consequently, $B^\alpha_{p,\theta}$ is continuously embedded into $B^{\alpha-\delta}_{q,\theta}$. Further, if $0 < p, \theta \le \infty$ and $\alpha \ge d/p$, then $B^\alpha_{p,\theta}$ is continuously embedded into $C(\mathbb{I}^d)$, and there is the inequality
$$\|f\|_\infty \;\le\; C\, \|f\|_{B^\alpha_{p,\theta}}.$$
This means that a function in $B^\alpha_{p,\theta}$ becomes continuous after correcting its values on a set of measure zero. In this sense, we will consider $B^\alpha_{p,\theta}$ with $\alpha \ge d/p$ as a subset of $C(\mathbb{I}^d)$. Notice also that the inequality $\alpha > d/p$ provides the compact embedding of $B^\alpha_{p,\theta}$ into $C(\mathbb{I}^d)$.

For $I = I_s \in D(m)$, let $\tilde I = \tilde I_s$ be the $d$-cube which is the union of the $d$-cubes $I_j \in D(m)$, $j \in Z_s(\mu + r)$, where $Z_s(l) := \{ j \in J(m) : |j_i - s_i| \le l,\ i = 1, 2, \ldots, d\}$.

Lemma 2.1 Let $0 < p \le \infty$ and $I \in D(m)$. Then for any continuous function $f$ on $\mathbb{I}^d$, we have
$$\|Q_m(f)\|_{\infty, I} \;\le\; \|f\|_{\infty, \tilde I},$$
and, if in addition, $f \in \Sigma(m)$, we have

$$\|Q_m(f)\|_{\infty, I} \;\le\; \max_{j \in Z_s(\mu + r)} \|f\|_{\infty, I_j} \;=\; \|f\|_{\infty, \tilde I}. \qquad (2.24)$$
If in addition $f \in \Sigma(m)$, then from (2.24) it follows that
$$\|Q_m(f)\|_{p, I_s} \;\le\; |I_s|^{1/p}\, \|Q_m(f)\|_{\infty, I_s} \;\le\; |I_s|^{1/p} \max_{j \in Z_s(\mu + r)} \|f\|_{\infty, I_j}.$$
Since $f$ is a polynomial on each $I_j$, $j \in Z_s(\mu + r)$, there is the inequality

with some constant $C$ depending on $r$, $\mu$, $d$ and $\Lambda$ only.

Proof Let $f \in \Sigma(k)$ and $k \ge m$. We have

$$=\; C^p\, 2^{d(k-m)}\, \|f\|_p^p.$$

For a $d$-cube contained in $\mathbb{I}^d$, we will need the following modified modulus of smoothness, for which a two-sided estimate holds with constants $C_1$, $C_2$ which depend on $l$, $p$, $d$ only (see [9]).

Lemma 2.3 Let $0 < p \le \infty$ and $f \in L_p$. Then we have
$$\|P_m(f) - Q_m(P_m(f))\|_p \;\le\; C\, \omega_{2r}(f, 2^{-m})_p,$$
with some constant $C$ depending on $r$, $\mu$, $p$, $d$ and $\Lambda$ only.

Proof Below we will denote by $C_j$ a constant depending at most on $r$, $\mu$, $p$, $d$ and $\Lambda$. In order to prove this lemma we will use Lemma 2.1 and a technique from the proof of Theorem 4.5 in [9]. For a $d$-cube

Let $I_s \in D(m)$, and let $g$ be the polynomial of best $L_p(\tilde I_s)$ approximation of $f$ by polynomials from $P^d_{2r-1}$. Let $\varphi$ be any polynomial in $P^d_{2r-1}$. There holds the inequality

Using the last estimate and (2.26) we derive that

Lemma 2.4 Let $0 < p \le \infty$ and $\theta \le \min(p, 1)$. Then for any $f \in L_p$, there holds the inequality (2.28), with some constant $C$ depending at most on $r$, $\mu$, $p$ and $\Lambda$, whenever the sum in the right-hand side is finite.

Proof Let $f \in L_p$ be a function such that the sum in the right-hand side of (2.28) is finite. We have by (2.3)

We obtain by (2.17)
$$\|f - P_m(f)\|_p \;\le\; C\, \omega_{2r}(f, 2^{-m})_p, \qquad (2.30)$$
and by Lemma 2.3

The following corollary is immediately implied by the last lemma.

Corollary 2.1 Let $0 < p, q, \theta \le \infty$, $0 < \alpha = d/p < 2r$, $\theta \le \min(1, q)$. Then for any $f \in B^\alpha_{p,\theta}$

Further, from the inequality
$$\|q_k(f)\|_p \;\ll\; \|f - Q_k(f)\|_p + \|f - Q_{k-1}(f)\|_p,$$

Combining (2.34)–(2.37) completes the proof of Theorem 2.1. $\square$

According to Theorem 2.1, a function $f \in B^\alpha_{p,\theta}$ has the decomposition in terms of the B-splines $M_{k,s} \in \mathcal{M}$, and an associated discrete equivalent quasi-norm for the functional coefficients. By using the B-spline refinement equation, one can represent the component functions $q_k(f)$ as
$$q_k(f) \;=\; \sum_{s \in J(k)} c_{k,s}(f)\, M_{k,s},$$
where $c_{k,s}$ are certain coefficient functionals of $f$, which are defined as follows. For the univariate case, we put

...


References
1. Besov, O.V., Il'in, V.P., Nikol'skii, S.M.: Integral Representations of Functions and Embedding Theorems. Winston & Sons, Washington, D.C.; Halsted Press [John Wiley & Sons], New York–Toronto–London (vol. I, 1978; vol. II, 1979)
2. Birman, M.S., Solomjak, M.Z.: Piecewise-polynomial approximations of the class $W^\alpha_p$. Math. USSR-Sb. 2(3), 295–317 (1967)
4. Chui, C.K., Diamond, H.: A natural formulation of quasi-interpolation by multivariate splines. Proc. Am. Math. Soc. 99, 643–646 (1987)
5. de Boor, C., Fix, G.J.: Spline approximation by quasiinterpolants. J. Approx. Theory 8, 19–45 (1973)
6. de Boor, C., Höllig, K., Riemenschneider, S.: Box Splines. Springer, Berlin (1993)
7. DeVore, R.A.: Nonlinear approximation. Acta Numer. 7, 51–150 (1998)
10. Dũng, D.: On interpolation recovery for periodic functions. In: Koshi, S. (ed.) Functional Analysis and Related Topics, pp. 224–233. World Scientific, Singapore (1991)
11. Dũng, D.: On nonlinear n-widths and n-term approximation. Vietnam J. Math. 26, 165–176 (1998)
12. Dũng, D.: Continuous algorithms in n-term approximation and non-linear n-widths. J. Approx. Theory 102, 217–242 (2000)
13. Dũng, D.: Asymptotic orders of optimal non-linear approximations. East J. Approx. 7, 55–76 (2001)
14. Dũng, D.: Non-linear approximations using sets of finite cardinality or finite pseudo-dimension. J. Complex. 17, 467–492 (2001)
15. Dũng, D.: Non-linear sampling recovery based on quasi-interpolant wavelet representations. Adv. Comput. Math. 30, 375–401 (2009)
16. Haussler, D.: Decision theoretic generalizations of the PAC model for neural net and other learning applications. Inf. Comput. 100(1), 78–150 (1992)
17. Haussler, D.: Sphere packing numbers for subsets of the Boolean n-cube with bounded Vapnik–Chervonenkis dimension. J. Comb. Theory, Ser. A 69, 217–232 (1995)
18. Kolmogorov, A.N., Tikhomirov, V.M.: ε-entropy and ε-capacity of sets in function spaces
19. Kudryavtsev, S.N.: The best accuracy of reconstruction of finitely smooth functions from their values at a given number of points. Izv. Math. 62, 19–53 (1998)
20. Nikol'skii, S.: Approximation of Functions of Several Variables and Embedding Theorems. Springer, Berlin (1975)
21. Novak, E.: Deterministic and Stochastic Error Bounds in Numerical Analysis. Lecture Notes in Mathematics, vol. 1349. Springer, Berlin (1988)
22. Novak, E., Triebel, H.: Function spaces in Lipschitz domains and optimal rates of convergence for sampling. Constr. Approx. 23, 325–350 (2006)
23. Pollard, D.: Empirical Processes: Theory and Applications. NSF-CBMS Regional Conference Series in Probability and Statistics, vol. 2. Inst. Math. Stat. and Am. Stat. Assoc., Providence (1989)