
Patel, Chellappa - Sparse Representations and Compressive Sensing for Imaging and Vision


DOCUMENT INFORMATION

Basic information

Title: Sparse Representations and Compressive Sensing for Imaging and Vision
Authors: Vishal M. Patel, Rama Chellappa
Institution: University of Maryland
Field: Electrical and Computer Engineering
Type: Monograph
Year of publication: 2013
City: College Park
Number of pages: 107
File size: 3.61 MB


SpringerBriefs in Electrical and Computer Engineering

For further volumes:

http://www.springer.com/series/10059


Sparse Representations and Compressive Sensing for Imaging and Vision


Automation Research, A.V. Williams Building, University of Maryland, College Park, MD

ISSN 2191-8112 ISSN 2191-8120 (electronic)

ISBN 978-1-4614-6380-1 ISBN 978-1-4614-6381-8 (eBook)

DOI 10.1007/978-1-4614-6381-8

Springer New York Heidelberg Dordrecht London

Library of Congress Control Number: 2012956308

© The Author(s) 2013

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media ( www.springer.com )


— Vishal M. Patel


We thank former and current students as well as collaborators - Richard Baraniuk, Volkan Cevher, Pavan Turaga, Ashok Veeraraghavan, Aswin Sankaranarayanan, Dikpal Reddy, Amit Agrawal, Nalini Ratha, Jaishanker Pillai, Hien Van Nguyen, Sumit Shekhar, Garrett Warnell, Qiang Qiu, Ashish Shrivastava - for letting us draw upon their work, thus making this monograph possible.

Research efforts summarized in this monograph were supported by the following grants and contracts: ARO MURI (W911NF-09-1-0383), ONR MURI (N00014-08-1-0638), ONR grant (N00014-12-1-0124), and a NIST grant (70NANB11H023).


Contents

1 Introduction
  1.1 Outline

2 Compressive Sensing
  2.1 Sparsity
  2.2 Incoherent Sampling
  2.3 Recovery
    2.3.1 Robust CS
    2.3.2 CS Recovery Algorithms
  2.4 Sensing Matrices
  2.5 Phase Transition Diagrams
  2.6 Numerical Examples

3 Compressive Acquisition
  3.1 Single Pixel Camera
  3.2 Compressive Magnetic Resonance Imaging
    3.2.1 Image Gradient Estimation
    3.2.2 Image Reconstruction from Gradients
    3.2.3 Numerical Examples
  3.3 Compressive Synthetic Aperture Radar Imaging
    3.3.1 Slow-time Undersampling
    3.3.2 Image Reconstruction
    3.3.3 Numerical Examples
  3.4 Compressive Passive Millimeter Wave Imaging
    3.4.1 Millimeter Wave Imaging System
    3.4.2 Accelerated Imaging with Extended Depth-of-Field
    3.4.3 Experimental Results
  3.5 Compressive Light Transport Sensing

4 Compressive Sensing for Vision
  4.1 Compressive Target Tracking
    4.1.1 Compressive Sensing for Background Subtraction
    4.1.2 Kalman Filtered Compressive Sensing
    4.1.3 Joint Compressive Video Coding and Analysis
    4.1.4 Compressive Sensing for Multi-View Tracking
    4.1.5 Compressive Particle Filtering
  4.2 Compressive Video Processing
    4.2.1 Compressive Sensing for High-Speed Periodic Videos
    4.2.2 Programmable Pixel Compressive Camera for High Speed Imaging
    4.2.3 Compressive Acquisition of Dynamic Textures
  4.3 Shape from Gradients
    4.3.1 Sparse Gradient Integration
    4.3.2 Numerical Examples

5 Sparse Representation-based Object Recognition
  5.1 Sparse Representation
  5.2 Sparse Representation-based Classification
    5.2.1 Robust Biometrics Recognition using Sparse Representation
  5.3 Non-linear Kernel Sparse Representation
    5.3.1 Kernel Sparse Coding
    5.3.2 Kernel Orthogonal Matching Pursuit
    5.3.3 Kernel Simultaneous Orthogonal Matching Pursuit
    5.3.4 Experimental Results
  5.4 Multimodal Multivariate Sparse Representation
    5.4.1 Multimodal Multivariate Sparse Representation
    5.4.2 Robust Multimodal Multivariate Sparse Representation
    5.4.3 Experimental Results
  5.5 Kernel Space Multimodal Recognition
    5.5.1 Multivariate Kernel Sparse Representation
    5.5.2 Composite Kernel Sparse Representation
    5.5.3 Experimental Results

6 Dictionary Learning
  6.1 Dictionary Learning Algorithms
  6.2 Discriminative Dictionary Learning
  6.3 Non-Linear Kernel Dictionary Learning

7 Concluding Remarks

References


1 Introduction

Compressive sampling¹ [23, 47] is an emerging field that has attracted considerable interest in signal/image processing, computer vision, and information theory. Recent advances in compressive sensing have led to the development of imaging devices that sense at measurement rates below the Nyquist rate. Compressive sensing exploits the property that the sensed signal is often sparse in some transform domain in order to recover it from a small number of linear, random, multiplexed measurements. Robust signal recovery is possible from a number of measurements that is proportional to the sparsity level of the signal, as opposed to its ambient dimensionality.

While there has been remarkable progress in compressive sensing for static signals such as images, its application to sensing temporal sequences such as videos has also recently gained a lot of traction. Compressive sensing of videos is a compelling application for dramatically reducing sensing costs. This manifests itself in many ways, including alleviating the data deluge problems [7] faced in the processing and storage of videos. Using novel sensors based on this theory, there is hope of accomplishing tasks such as target tracking and object recognition while collecting significantly less data than traditional systems.

In this monograph, we will present an overview of the theories of sparse representation and compressive sampling and examine several interesting imaging modalities based on these theories. We will also explore the use of linear and non-linear kernel sparse representation, as well as compressive sensing, in many computer vision problems including target tracking, background subtraction, and object recognition.

Writing this monograph presented a great challenge. Due to page limitations, we could not include all that we wished. We beg the forgiveness of many of our fellow researchers who have made significant contributions to the problems covered in this monograph and whose works could not be discussed.

¹ Also known as compressive sensing or compressed sensing.



We begin the monograph with a brief discussion on compressive sampling in Sect. 2. In particular, we present some fundamental premises underlying CS: sparsity, incoherent sampling, and non-linear recovery. Some of the main results are also reviewed.

In Sect. 3, we describe several imaging modalities that make use of the theory of compressive sampling. In particular, we present applications in medical imaging, synthetic aperture radar imaging, millimeter wave imaging, the single pixel camera, and light transport sensing.

In Sect. 4, we present some applications of compressive sampling in computer vision and image understanding. We show how the sparse representation and compressive sampling framework can be used to develop robust algorithms for target tracking. We then present several applications in video compressive sampling. Finally, we show how compressive sampling can be used to develop algorithms for recovering shapes and images from gradients.

Section 5 discusses some applications of sparse representation and compressive sampling in object recognition. In particular, we first present an overview of the sparse representation framework. We then show how it can be used to develop robust algorithms for object recognition. Through the use of Mercer kernels, we show how the sparse representation framework can be made non-linear. We also discuss multimodal multivariate sparse representation, as well as its non-linear extension, at the end of this section.

In Sect. 6, we discuss recent advances in dictionary learning. In particular, we present an overview of the method of optimal directions and the K-SVD algorithm for learning dictionaries. We then show how dictionaries can be designed to achieve discrimination as well as reconstruction. Finally, we highlight some of the methods for learning non-linear kernel dictionaries.

Finally, concluding remarks are presented in Sect. 7.


2 Compressive Sensing

Compressive sensing [47], [23] is a new concept in signal processing and information theory where one measures a small number of non-adaptive linear combinations of the signal. These measurements are usually much fewer than the number of samples that define the signal. From this small number of measurements, the signal is then reconstructed by a non-linear procedure. In what follows, we present some fundamental premises underlying CS: sparsity, incoherent sampling, and non-linear recovery.

2.1 Sparsity

Let x be a discrete-time signal which can be viewed as an N × 1 column vector in R^N. Given an orthonormal basis matrix B ∈ R^{N×N} whose columns are the basis elements {b_i}_{i=1}^N, x can be represented in terms of this basis as

x = ∑_{i=1}^{N} α_i b_i,   (2.1)

or more compactly x = Bα, where α is an N × 1 column vector of coefficients. These coefficients are given by α_i = ⟨x, b_i⟩ = b_i^T x, where T denotes the transposition operation. If the basis B provides a K-sparse representation of x, then (2.1) can be rewritten as

x = ∑_{i=1}^{K} α_{n_i} b_{n_i},   (2.2)

where {n_i} are the indices of the coefficients and the basis elements corresponding to the K nonzero entries. In this case, α is an N × 1 column vector with only K nonzero elements. That is, ‖α‖₀ = K, where ‖·‖_p denotes the ℓ_p norm, defined as ‖x‖_p = (∑_{i=1}^{N} |x_i|^p)^{1/p}; for p = 0, ‖x‖₀ is taken to be the number of nonzero entries of x.
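To make the synthesis relation x = Bα and the analysis relation α_i = b_i^T x concrete, here is a minimal numerical sketch (not taken from the monograph; it uses an orthonormal DCT-II basis as B, whereas the book's running example uses wavelets):

```python
import numpy as np

N, K = 256, 8
rng = np.random.default_rng(0)

# Orthonormal DCT-II basis; column i of B is the basis element b_i, so B.T @ B = I.
n = np.arange(N)
B = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[:, None] + 1) * n[None, :] / (2 * N))
B[:, 0] /= np.sqrt(2.0)

# Synthesis: a K-sparse coefficient vector alpha gives x = B @ alpha.
alpha = np.zeros(N)
alpha[rng.choice(N, size=K, replace=False)] = rng.normal(size=K)
x = B @ alpha

# Analysis: alpha_i = <x, b_i> = b_i^T x, i.e. alpha = B.T @ x.
alpha_hat = B.T @ x
print(np.count_nonzero(np.abs(alpha_hat) > 1e-10))   # prints 8, i.e. ||alpha||_0 = K
```

Because B is orthonormal, analysis exactly inverts synthesis, which is what makes the sparsity level of α a well-defined property of x in that basis.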


Typically, real-world signals are not exactly sparse in any orthogonal basis. Instead, they are compressible. A signal is said to be compressible if the magnitude of its coefficients, when sorted in decreasing order, decays according to a power law [87], [19]. That is, when we rearrange the sequence in decreasing order of magnitude, |α|_(1) ≥ |α|_(2) ≥ ··· ≥ |α|_(N), the following holds:

|α|_(n) ≤ C · n^{-s},   (2.3)

where |α|_(n) is the nth largest entry of α, s ≥ 1, and C is a constant. For a given L, the L-term linear combination of elements that best approximates x in an ℓ₂ sense is obtained by keeping the L largest terms in the expansion.

In other words, a small number of vectors from B can provide accurate approximations to x. This type of approximation is often known as non-linear approximation [87].

Fig. 2.1 shows an example of the non-linear approximation of the Boats image using the Daubechies 4 wavelet. The original Boats image is shown in Fig. 2.1(a). Two-level Daubechies 4 wavelet coefficients are shown in Fig. 2.1(b). As can be seen from this figure, these coefficients are very sparse. The plot of the sorted absolute values of the coefficients of the image is shown in Fig. 2.1(c). The reconstructed image after keeping only 10% of the coefficients with the largest magnitude is shown in Fig. 2.1(d). This reconstruction provides a very good approximation to the original image. In fact, it is well known that wavelets provide the best representation for piecewise smooth images. Hence, in practice wavelets are often used to compressively represent images.


Fig. 2.1 Compressibility of wavelets. (a) Original Boats image. (b) Wavelet coefficients. (c) Plot of the sorted absolute values of the coefficients. (d) Reconstructed image after keeping only 10% of the coefficients with the largest magnitude.
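A quick way to see the effect illustrated in Fig. 2.1, without the Boats image or a wavelet toolbox, is the following stand-in sketch (our own construction, not the book's experiment): it draws a compressible coefficient vector whose sorted magnitudes obey the power law (2.3), keeps the largest 10%, and measures the relative ℓ₂ error of the L-term approximation. Since the basis is orthonormal, the error on the coefficients equals the error on the signal.

```python
import numpy as np

N = 4096
rng = np.random.default_rng(1)

# Compressible (not exactly sparse) coefficients: |alpha|_(n) ~ C * n^(-s), s >= 1.
s, C = 1.5, 10.0
alpha = C * np.arange(1, N + 1) ** (-s) * rng.choice([-1.0, 1.0], size=N)
rng.shuffle(alpha)

# L-term non-linear approximation: keep the L coefficients largest in magnitude.
L = N // 10                                    # keep 10% of the coefficients
idx = np.argsort(np.abs(alpha))[::-1][:L]
alpha_L = np.zeros(N)
alpha_L[idx] = alpha[idx]

rel_err = np.linalg.norm(alpha - alpha_L) / np.linalg.norm(alpha)
print(f"relative l2 error of the {L}-term approximation: {rel_err:.2e}")  # small
```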

In CS, the K largest α_i in (2.1) are not measured directly. Instead, M < N inner products of the vector x with a collection of vectors {φ_j}_{j=1}^M are measured, as in y_j = ⟨x, φ_j⟩. Arranging the measurement vectors φ_j^T as rows in an M × N matrix Φ and using (2.1), the measurement process can be written as

y = Φx = ΦBα = Aα,   (2.4)


where y is an M × 1 column vector of the compressive measurements and A = ΦB is the measurement matrix, or sensing matrix. Given an M × N sensing matrix A and the observation vector y, the general problem is to recover the sparse or compressible vector α. To this end, the first question is to determine whether A is good for compressive sensing. Candès and Tao introduced a condition on A that guarantees a stable solution for both K-sparse and compressible signals [26], [24].
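Before turning to such conditions, here is a small sketch of the measurement model (2.4) itself (dimensions and variable names are our own illustrative choices): a random Gaussian Φ is combined with the DCT basis B from the earlier sketch to collect M ≪ N measurements of a K-sparse signal.

```python
import numpy as np

N, M, K = 256, 64, 8
rng = np.random.default_rng(2)

# Orthonormal DCT-II basis B (columns are the basis elements), as in the earlier sketch.
n = np.arange(N)
B = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[:, None] + 1) * n[None, :] / (2 * N))
B[:, 0] /= np.sqrt(2.0)

# Random Gaussian measurement matrix Phi; row j is the measurement vector phi_j^T.
Phi = rng.normal(size=(M, N)) / np.sqrt(M)

# K-sparse coefficients, the signal x = B alpha, and the measurements y = Phi x = A alpha.
alpha = np.zeros(N)
alpha[rng.choice(N, size=K, replace=False)] = rng.normal(size=K)
x = B @ alpha
A = Phi @ B
y = A @ alpha
assert np.allclose(y, Phi @ x)        # the two forms of (2.4) agree
print(A.shape, y.shape)               # (64, 256) (64,): far fewer measurements than samples
```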

Definition 2.1 A matrix A is said to satisfy the Restricted Isometry Property (RIP) of order K with constant δ_K ∈ (0, 1) if

(1 − δ_K)‖v‖₂² ≤ ‖Av‖₂² ≤ (1 + δ_K)‖v‖₂²

for any v such that ‖v‖₀ ≤ K.

An equivalent description of RIP is to say that all subsets of K columns taken from A are nearly orthogonal. This in turn implies that K-sparse vectors cannot be in the null space of A. When RIP holds, A approximately preserves the Euclidean length of K-sparse vectors. That is,

(1 − δ_2K)‖v₁ − v₂‖₂² ≤ ‖Av₁ − Av₂‖₂² ≤ (1 + δ_2K)‖v₁ − v₂‖₂²

holds for all K-sparse vectors v₁ and v₂.
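Verifying RIP exactly is combinatorial, but the near-isometry it expresses can be probed empirically. The sketch below is our own illustration (not a proof and not from the monograph): it draws random K-sparse vectors and records how far ‖Av‖₂²/‖v‖₂² strays from 1 for a Gaussian A scaled by 1/√M.

```python
import numpy as np

N, M, K, trials = 512, 128, 10, 2000
rng = np.random.default_rng(3)
A = rng.normal(size=(M, N)) / np.sqrt(M)     # columns have unit expected squared norm

ratios = np.empty(trials)
for t in range(trials):
    v = np.zeros(N)
    v[rng.choice(N, size=K, replace=False)] = rng.normal(size=K)
    ratios[t] = np.linalg.norm(A @ v) ** 2 / np.linalg.norm(v) ** 2

# An exact isometry on K-sparse vectors would give ratios identically equal to 1;
# the observed spread gives an empirical feel for delta_K on the sampled vectors.
print(f"min ratio {ratios.min():.2f}, max ratio {ratios.max():.2f}")
```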

A related condition, known as incoherence, requires that the rows of Φ cannot sparsely represent the columns of B, and vice versa.

Definition 2.2 The coherence between Φ and the representation basis B is

μ(Φ, B) = √N · max_{1≤i,j≤N} |⟨φ_i, b_j⟩|,   (2.5)

where φ_i ∈ Φ and b_j ∈ B.

The number μ measures how much two vectors in Φ and B can look alike. The value of μ is between 1 and √N. We say that a matrix A is incoherent when μ is very small. Incoherence holds for many pairs of bases. For example, it holds for the delta spikes and the Fourier basis. Surprisingly, with high probability, incoherence holds between any arbitrary basis and a random matrix such as Gaussian or Bernoulli [6], [142].
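The coherence in Definition 2.2 is easy to compute directly. The sketch below (our own choice of bases, for illustration only) evaluates μ for the spike/Fourier pair mentioned above, where it attains its minimum value of 1, and for spikes paired with a random orthonormal basis, where it stays far below √N with high probability.

```python
import numpy as np

def coherence(Phi, B):
    # mu(Phi, B) = sqrt(N) * max_{i,j} |<phi_i, b_j>|, rows of Phi against columns of B.
    N = B.shape[0]
    return np.sqrt(N) * np.max(np.abs(Phi @ B))

N = 256
rng = np.random.default_rng(4)

spikes = np.eye(N)                              # delta-spike sampling vectors as rows
fourier = np.fft.fft(np.eye(N)) / np.sqrt(N)    # unitary DFT basis (columns)

print(f"spikes vs. Fourier: mu = {coherence(spikes, fourier):.3f}")   # 1.000 (maximal incoherence)

Q, _ = np.linalg.qr(rng.normal(size=(N, N)))    # random orthonormal basis
print(f"spikes vs. random basis: mu = {coherence(spikes, Q):.3f}")    # small compared with sqrt(N)=16
```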

Since M < N, the system of equations y = Aα is underdetermined and in general has infinitely many solutions, so our problem is ill-posed. If one desires to narrow the choice to a well-defined solution, additional constraints are needed.


One approach is to find the minimum ℓ₂-norm solution by solving the following optimization problem:

α̂ = argmin_α ‖α‖₂ subject to y = Aα,

whose solution is given by α̂ = A†y, where A* is the adjoint of A and A† = A*(AA*)⁻¹ is the pseudo-inverse of A. This solution, however, yields a non-sparse vector. The approach taken in CS is to instead find the sparsest solution.

The problem of finding the sparsest solution can be reformulated as finding a vector α ∈ R^N with the minimum possible number of nonzero entries. That is,

α̂ = argmin_α ‖α‖₀ subject to y = Aα.   (2.6)

This problem can recover a K-sparse signal exactly. However, it is an NP-hard problem: it requires an exhaustive search over all possible K-element supports (there are N-choose-K of them), which is intractable in many cases of practical interest. The standard alternative is to replace the ℓ₀ norm with its closest convex surrogate, the ℓ₁ norm, and solve

α̂ = argmin_α ‖α‖₁ subject to y = Aα.   (2.7)

This program also approximates compressible signals. This convex optimization program is often known as Basis Pursuit (BP) [38]. The use of ℓ₁ minimization for signal restoration was initially observed by engineers working in seismic exploration as early as the 1970s [52]. In the last few years, a series of papers [47], [142], [21], [25], [19], [22] explained why ℓ₁ minimization can recover sparse signals in various practical setups.
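The ℓ₁ program above is a linear program in disguise, so a generic LP solver suffices for small problems. The sketch below is our own minimal implementation, not the monograph's code; it assumes SciPy's HiGHS-backed linprog, splits the variable into α and a bound t on |α|, and recovers a K-sparse vector from M ≪ N random measurements.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||alpha||_1 s.t. A alpha = y as an LP over z = [alpha; t], |alpha_i| <= t_i."""
    M, N = A.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])              # minimize sum_i t_i
    A_eq = np.hstack([A, np.zeros((M, N))])                    # equality: A alpha = y
    A_ub = np.block([[ np.eye(N), -np.eye(N)],                 #  alpha - t <= 0
                     [-np.eye(N), -np.eye(N)]])                # -alpha - t <= 0
    b_ub = np.zeros(2 * N)
    bounds = [(None, None)] * N + [(0, None)] * N              # alpha free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=bounds, method="highs")
    return res.x[:N]

rng = np.random.default_rng(5)
N, M, K = 128, 48, 5
A = rng.normal(size=(M, N)) / np.sqrt(M)
alpha = np.zeros(N)
alpha[rng.choice(N, size=K, replace=False)] = rng.normal(size=K)
y = A @ alpha

alpha_hat = basis_pursuit(A, y)
print(np.max(np.abs(alpha_hat - alpha)))       # essentially zero: exact recovery
```

Dedicated solvers (greedy pursuits, proximal methods) scale much better than a dense LP; the point here is only that (2.7) is a convex program a standard tool can solve.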


In practice the measurements are contaminated by noise,

y = Aα + η,   (2.8)

where η is an unknown perturbation with bounded energy, ‖η‖₂ ≤ ε. A stable recovery is then obtained by relaxing the equality constraint of BP and solving

α̂ = argmin_α ‖α‖₁ subject to ‖y − Aα‖₂ ≤ ε.   (2.9)

The problem (2.9) is often known as Basis Pursuit DeNoising (BPDN) [38]. In [22], Candès et al. showed that the solution to (2.9) recovers an unknown sparse signal with an error at most proportional to the noise level.

Theorem 2.1 [22] Let A satisfy the RIP of order 4K with δ_3K + 3δ_4K < 2. Then, for any K-sparse signal α and any perturbation η with ‖η‖₂ ≤ ε, the solution α̂ to (2.9) obeys

‖α̂ − α‖₂ ≤ C_K · ε,

with a well-behaved constant C_K.

Note that for K obeying the condition of the theorem, the reconstruction from noiseless data is exact. A similar result also holds for stable recovery from imperfect measurements of approximately sparse signals (i.e., compressible signals).

Theorem 2.2 [22] Let A satisfy the RIP of order 4K. Suppose that α is an arbitrary vector in R^N and let α_K be the truncated vector corresponding to the K largest values of α in magnitude. Under the hypothesis of Theorem 2.1, the solution α̂ to (2.9) obeys

‖α̂ − α‖₂ ≤ C_{1,K} · ε + C_{2,K} · ‖α − α_K‖₁ / √K,

with well-behaved constants C_{1,K} and C_{2,K}.

For signals obeying (2.3), there are fundamentally no better estimates available. This, in turn, means that with only M measurements, one can achieve an approximation error which is almost as good as the one obtained by knowing everything about the signal α and selecting its K largest elements [22].

2.3.1.1 The Dantzig selector

In (2.8), if the noise is assumed to be Gaussian with mean zero and variance σ², η ∼ N(0, σ²), then stable recovery of the signal is also possible by solving a modified optimization problem

α̂ = argmin_α ‖α‖₁ subject to ‖Aᵀ(y − Aα)‖_∞ ≤ ε,   (2.10)

where ε = λ_N σ for some λ_N > 0 and ‖·‖_∞ denotes the ℓ_∞ norm. For an N-dimensional vector x, it is defined as ‖x‖_∞ = max(|x₁|, ···, |x_N|). The above program is known as the Dantzig selector [28].
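Like basis pursuit, the Dantzig selector (2.10) is a linear program: the constraint ‖Aᵀ(y − Aα)‖_∞ ≤ ε is just 2N linear inequalities. The sketch below is our own formulation (it reuses the [α; t] splitting and SciPy's linprog, with an illustrative choice of ε), not the reference implementation of [28].

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(A, y, eps):
    """Solve min ||alpha||_1 s.t. ||A.T (y - A alpha)||_inf <= eps as an LP."""
    M, N = A.shape
    G, b = A.T @ A, A.T @ y
    c = np.concatenate([np.zeros(N), np.ones(N)])
    Z = np.zeros((N, N))
    A_ub = np.block([[ np.eye(N), -np.eye(N)],      #  alpha - t <= 0
                     [-np.eye(N), -np.eye(N)],      # -alpha - t <= 0
                     [ G,          Z        ],      #  A^T A alpha <= A^T y + eps
                     [-G,          Z        ]])     # -A^T A alpha <= -A^T y + eps
    b_ub = np.concatenate([np.zeros(2 * N), b + eps, -b + eps])
    bounds = [(None, None)] * N + [(0, None)] * N
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:N]

# Example use with noise level sigma and eps = sigma * sqrt(2 log N), mirroring Theorem 2.3.
rng = np.random.default_rng(7)
N, M, K, sigma = 64, 32, 4, 0.01
A = rng.normal(size=(M, N)) / np.sqrt(M)
alpha = np.zeros(N)
alpha[rng.choice(N, size=K, replace=False)] = rng.normal(size=K)
y = A @ alpha + sigma * rng.normal(size=M)
alpha_hat = dantzig_selector(A, y, sigma * np.sqrt(2 * np.log(N)))
print(np.linalg.norm(alpha_hat - alpha))        # reconstruction error, comparable to the noise level
```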


Theorem 2.3 [28] Suppose α ∈ R^N is any K-sparse vector obeying δ_2K + ϑ_{K,2K} < 1. Choose λ_N = √(2 log N) in (2.10). Then, with large probability, the solution α̂ to (2.10) obeys

‖α̂ − α‖₂² ≤ C² · 2 log N · K σ²,

where ϑ_{K,2K} is the (K, 2K)-restricted orthogonality constant, defined as follows.

Definition 2.3 The (K, K′)-restricted orthogonality constant ϑ_{K,K′} of A is the smallest quantity such that

|⟨Av, Av′⟩| ≤ ϑ_{K,K′} ‖v‖₂ ‖v′‖₂

holds for all vectors v and v′ supported on disjoint index sets of sizes at most K and K′, respectively.


... of compressive sensing in the context of optical imaging as well as analog-to-information conversion.

3.1 Single Pixel Camera ...

... signal propagation, and the cross-range is the direction parallel to the flight path. Sometimes the range and the cross-range samples are referred to as the fast-time and the slow-time samples, respectively.


References

18. Christopher J.C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2:121–167, 1998.
19. E. Candes and J. Romberg. Practical signal recovery from random projections. Proceedings of the SPIE, 5674:76–86, 2005.
20. E. Candes and J. Romberg. Signal recovery from random projections. In Proc. of SPIE Computational Imaging III, 5674, 2005.
21. E. Candes and J. Romberg. Quantitatively robust uncertainty principles and optimally sparse decompositions. Foundations of Comput. Math., 6(2):227–254, April 2006.
22. E. Candes, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, August 2006.
23. E. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, Feb. 2006.
24. E. Candes and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203–4215, Dec. 2005.
25. E. Candes and T. Tao. Near optimal signal recovery from random projections: Universal encoding strategies? IEEE Transactions on Information Theory, 52(12):5406–5425, Dec. 2006.
26. E. J. Candes. Compressive sampling. International Congress of Mathematics, Madrid, Spain, 3:1433–1452, 2006.
27. E. J. Candes, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of ACM, 58(1):1–37, 2009.
28. E. J. Candes and T. Tao. The Dantzig selector: statistical estimation when p is much larger than n. Annals of Statistics, 35(6):2313–2351, 2007.
29. E. J. Candes, M. Wakin, and S. Boyd. Enhancing sparsity by reweighted ℓ1 minimization. J. Fourier Anal. Appl., 14:877–905, 2008.
30. W. G. Carrara, R. S. Goodman, and R. M. Majewski. Spotlight Synthetic Aperture Radar: Signal Processing Algorithms. Artech House, Norwood, MA, 1995.
31. W. Thomas Cathey and Edward R. Dowski. New paradigm for imaging systems. Appl. Opt., 41(29):6080–6092, Oct. 2002.
32. M. Çetin and W. C. Karl. Feature-enhanced synthetic aperture radar image formation based on nonquadratic regularization. IEEE Transactions on Image Processing, 10(4):623–631, Apr. 2001.
33. V. Cevher, A. Sankaranarayanan, M. Duarte, D. Reddy, R. Baraniuk, and R. Chellappa. Compressive sensing for background subtraction. ECCV, 2008.
34. A. B. Chan and N. Vasconcelos. Probabilistic kernels for the classification of auto-regressive visual processes. In IEEE Conf. on Computer Vision and Pattern Recognition, pages 846–851, 2005.
35. T. F. Chan, S. Esedoglu, F. Park, and M. H. Yip. Recent Developments in Total Variation Image Restoration. Springer Verlag, 2005.
36. Wai Lam Chan, Kriti Charan, Dharmpal Takhar, Kevin F. Kelly, Richard G. Baraniuk, and Daniel M. Mittleman. A single-pixel terahertz imaging system based on compressed sensing. Appl. Phys. Lett., 93(12):121105–3, 2008.
37. R. Chartrand. Exact reconstructions of sparse signals via nonconvex minimization. IEEE Signal Processing Letters, 14:707–710, 2007.
