The Nature of Biomedical Images (Part 12)



The final purpose of biomedical image analysis is to classify a given image, or the features that have been detected in the image, into one of a few known categories. In medical applications, a further goal is to arrive at a diagnostic decision regarding the condition of the patient. A physician or medical specialist may achieve this goal via visual analysis of the image and data presented: comparative analysis of the given image with others of known diagnoses, or the application of established protocols and sets of rules, assists in such a decision-making process. Images taken earlier of the same patient may also be used, when available, for comparative or differential analysis. Some measurements may also be made from the given image to assist in the analysis. The basic knowledge, clinical experience, expertise, and intuition of the physician play significant roles in this process.

When image analysis is performed via the application of computer algorithms, the typical result is the extraction of a number of numerical features. When the numerical features relate directly to measurements of organs or features represented by the image, such as an estimate of the size of the heart or the volume of a tumor, the clinical specialist may be able to use the features directly in his or her diagnostic logic. However, when parameters such as measures of texture and shape complexity are derived, a human analyst is not likely to be able to analyze or comprehend the features. Furthermore, as the number of the computed features increases, the associated diagnostic logic may become too complicated and unwieldy for human analysis. Computer methods would then be desirable to perform the classification and decision process.

At the outset, it should be borne in mind that a biomedical image forms but one piece of information in arriving at a diagnosis: the classification of a given image into one of many categories may assist in the diagnostic procedure, but will almost never be the only factor. Regardless, pattern classification based upon image analysis is indeed an important aspect of biomedical image analysis, and forms the theme of the present chapter. Remaining within the realm of CAD as introduced in Figure 1.33 and Section 1.11, it would be preferable to design methods so as to aid a medical specialist in arriving at a diagnosis rather than to provide a decision.


A generic problem statement for pattern classification may be expressed as follows: A number of measures and features have been derived from a biomedical image. Develop methods to classify the image into one of a few specified categories. Investigate the relevance of the features and the classification methods in arriving at a diagnostic decision about the patient.

Observe that the features mentioned above may have been derived manually or by computer methods. Recognize the distinction between classifying the given image and arriving at a diagnosis regarding the patient: the connection between the two tasks or steps may not always be direct. In other words, a pattern classification method may facilitate the labeling of a given image as being a member of a particular class; arriving at a diagnosis of the condition of the patient will most likely require the analysis of several other items of clinical information. Although it is common to work with a prespecified number of pattern classes, many problems do exist where the number of classes is not known a priori. A special case is screening, where the aim is to simply decide on the presence or absence of a certain type of abnormality or disease. The initial decision in screening may be further focused on whether the subject appears to be free of the specific abnormality of concern or requires further investigation.

The problem statement and description above are rather generic. Several considerations arise in the practical application of the concepts mentioned above to medical images and diagnosis. Using the detection of breast cancer as an example, the following questions illustrate some of the problems encountered in practice:

- Is a mass or tumor present? (Yes/No)
- If a mass or tumor is present:
  - Give or mark its location.
  - Compare the density of the mass to that of the surrounding tissues: hypodense, isodense, hyperdense.
  - Describe the shape of its boundary: round, ovoid, irregular, macrolobulated, microlobulated, spiculated.
  - Describe its texture: homogeneous, heterogeneous, fatty.
  - Describe its edge: sharp (well-circumscribed), ill-defined (fuzzy).
  - Decide if it is a benign mass, a cyst (solid or fluid-filled), or a malignant tumor.
- Are calcifications present? (Yes/No)
- If calcifications are present:
  - Estimate their number per cm².
  - Describe their shape: round, ovoid, elongated, branching, rough, punctate, irregular, amorphous.
- Are there major changes compared to the previous mammogram of the patient?
- Is the case normal? (Yes/No)
- If the case is abnormal:
  - Is the disease benign or malignant (cancer)?

The items listed above give a selection of the many features of mammograms that a radiologist would investigate; see Ackerman et al. [1084] and the BI-RADS™ manual [403] for more details. Figure 12.1 shows a graphical user interface developed by Alto et al. [528, 1085] for the categorization of breast masses related to some of the questions listed above. Figure 12.2 illustrates four segments of mammograms demonstrating masses and tumors of different characteristics, progressing from a well-circumscribed and homogeneous benign mass to a highly spiculated and heterogeneous tumor.

The subject matter of this book, image analysis and pattern classification, can provide assistance in responding to only some of the questions listed above. Even an entire set of mammograms may not lead to a final decision: other modes of diagnostic imaging and means of investigation may be necessary to arrive at a definite diagnosis.

In the following sections, a number of methods for pattern classification, decision making, and evaluation of the results of classification are reviewed and illustrated.

(Note: Parts of this chapter are reproduced, with permission, from R.M. Rangayyan, Biomedical Signal Analysis: A Case-Study Approach, IEEE Press and Wiley, New York, NY, 2002. © IEEE.)

12.1 Pattern Classification

Pattern recognition or classification may be defined as the categorization of the input data into identifiable classes via the extraction of significant features or attributes of the data from a background of irrelevant detail [402, 721, 1086, 1087, 1088, 1089, 1090]. In biomedical image analysis, after quantitative features have been extracted from the given images, each image (or ROI) may be represented by a feature vector $\mathbf{x} = [x_1\ x_2\ \dots\ x_n]^T$, which is also known


FIGURE 12.1
Graphical user interface for the categorization of breast masses. Reproduced with permission from H. Alto, R.M. Rangayyan, R.B. Paranjape, J.E.L. Desautels, and H. Bryant, "An indexed atlas of digital mammograms for computer-aided diagnosis of breast cancer", Annales des Télécommunications, 58(5): 820–835, 2003. © GET – Lavoisier. Figure courtesy of C. Le Guillou, École Nationale Supérieure des Télécommunications de Bretagne, Brest, France.


(a) b145lc95 (b) b164ro94 (c) m51rc97 (d) m55lo97

FIGURE 12.2
Examples of breast mass regions and contours with the corresponding values of fractional concavity fcc, spiculation index SI, compactness cf, acutance A, and sum entropy F8. (a) Circumscribed benign mass. (b) Macrolobulated benign mass. (c) Microlobulated malignant tumor. (d) Spiculated malignant tumor. Note that the masses and their contours are of widely differing size, but have been scaled to the same size in the illustration. The first letter of the case identifier indicates a malignant diagnosis with 'm' and a benign diagnosis with 'b' based upon biopsy. The symbols after the first numerical portion of the identifier represent l: left, r: right, c: cranio-caudal view, o: medio-lateral oblique view, x: axillary view. The last two digits represent the year of acquisition of the mammogram. An additional character of the identifier after the year (a–f), if present, indicates the existence of multiple masses visible in the same mammogram. Reproduced with permission from H. Alto, R.M. Rangayyan, and J.E.L. Desautels, "Content-based retrieval and analysis of mammographic masses", Journal of Electronic Imaging, in press, 2005. © SPIE and IS&T.


as the measurement vector or a pattern vector. When the values $x_i$ are real numbers, $\mathbf{x}$ is a point in an n-dimensional Euclidean space: vectors of similar objects may be expected to form clusters, as illustrated in Figure 12.3.

FIGURE 12.3
Two-dimensional feature vectors of two classes, C1 and C2. The prototypes of the two classes are indicated by the vectors $\mathbf{z}_1$ and $\mathbf{z}_2$. The linear decision function $d(\mathbf{x})$ shown (solid line) is the perpendicular bisector of the straight line joining the two prototypes (dashed line). Reproduced with permission from R.M. Rangayyan, Biomedical Signal Analysis: A Case-Study Approach, IEEE Press and Wiley, New York, NY, 2002. © IEEE.

For efficient pattern classification, measurements that could lead to disjoint sets or clusters of feature vectors are desired. This point underlines the importance of the appropriate design of the preprocessing and feature extraction procedures. Features or characterizing attributes that are common to all patterns belonging to a particular class are known as intraset or intraclass features. Discriminant features that represent the differences between pattern classes are called interset or interclass features.

The pattern classification problem is that of generating optimal decision boundaries or decision procedures to separate the data into pattern classes based on the feature vectors provided. Figure 12.3 illustrates a simple linear decision function or boundary to separate 2D feature vectors into two classes.


boundaries that separate the classes.

A given set of feature vectors of known categorization is often referred to as a training set. The availability of a training set facilitates the development of mathematical functions that can characterize the separation between the classes. The functions may then be applied to new feature vectors of unknown classes to classify or recognize them. This approach is known as supervised pattern classification. A set of feature vectors of known categorization that is used to evaluate a classifier designed in this manner is referred to as a test set. After adequate testing and confirmation of the method with satisfactory results, the classifier may be applied to new feature vectors of unknown classes; the results may then be used to arrive at diagnostic decisions. The following subsections describe a few methods that can assist in the development of discriminant and decision functions.

12.2.1 Discriminant and decision functions

A general linear discriminant or decision function is of the form

$$d(\mathbf{x}) = w_1 x_1 + w_2 x_2 + \dots + w_n x_n + w_{n+1} = \mathbf{w}^T \mathbf{x}, \quad (12.1)$$

where $\mathbf{x} = [x_1\ x_2\ \dots\ x_n\ 1]^T$ is the feature vector augmented by an additional entry equal to unity, and $\mathbf{w} = [w_1\ w_2\ \dots\ w_n\ w_{n+1}]^T$ is a correspondingly augmented weight vector. A two-class pattern classification problem may be stated as

$$d(\mathbf{x}) = \mathbf{w}^T \mathbf{x}\ \begin{cases} > 0 & \text{if } \mathbf{x} \in C_1, \\ \le 0 & \text{if } \mathbf{x} \in C_2, \end{cases} \quad (12.2)$$

where $C_1$ and $C_2$ represent the two classes. The discriminant function may be interpreted as the boundary separating the classes $C_1$ and $C_2$, as illustrated in Figure 12.3.
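The augmented-vector form of Equations 12.1 and 12.2 can be sketched in a few lines of code. The weight values below are illustrative assumptions (they define the toy boundary $x_1 + x_2 = 1$), not values taken from the text.

```python
def discriminant(w, x):
    """Evaluate d(x) = w^T [x, 1] for an augmented weight vector w."""
    x_aug = list(x) + [1.0]  # augment the feature vector with an entry equal to unity
    return sum(wi * xi for wi, xi in zip(w, x_aug))

def classify(w, x):
    """Two-class rule of Equation 12.2: C1 if d(x) > 0, else C2."""
    return "C1" if discriminant(w, x) > 0 else "C2"

# Illustrative weights for 2D features: d(x) = x1 + x2 - 1
w = [1.0, 1.0, -1.0]
```

A vector such as (1, 1) falls on the positive side of this boundary and is assigned to C1, while the origin falls on the negative side and is assigned to C2.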

In the general case of an M-class pattern classification problem, we will need M weight vectors and M decision functions to perform the following decisions:

$$d_i(\mathbf{x}) = \mathbf{w}_i^T \mathbf{x}\ \begin{cases} > 0 & \text{if } \mathbf{x} \in C_i, \\ \le 0 & \text{otherwise}, \end{cases} \quad i = 1, 2, \dots, M, \quad (12.3)$$

where $\mathbf{w}_i = [w_{i1}\ w_{i2}\ \dots\ w_{in}\ w_{i,n+1}]^T$ is the weight vector for the class $C_i$. Three cases arise in solving this problem [1086]:

Case 1: Each class is separable from the rest by a single decision surface:

$$\text{if } d_i(\mathbf{x}) > 0 \text{ then } \mathbf{x} \in C_i. \quad (12.4)$$


Case 2: Each class is separable from every other individual class by a distinct decision surface; that is, the classes are pairwise separable. There are $M(M-1)/2$ decision surfaces, given by $d_{ij}(\mathbf{x}) = \mathbf{w}_{ij}^T \mathbf{x}$:

$$\text{if } d_{ij}(\mathbf{x}) > 0 \ \ \forall j \ne i, \text{ then } \mathbf{x} \in C_i. \quad (12.5)$$

[Note: $d_{ij}(\mathbf{x}) = -d_{ji}(\mathbf{x})$.]

Case 3: There exist M decision functions $d_i(\mathbf{x}) = \mathbf{w}_i^T \mathbf{x}$ with the property that

$$\text{if } d_i(\mathbf{x}) > d_j(\mathbf{x}) \ \ \forall j \ne i, \text{ then } \mathbf{x} \in C_i. \quad (12.6)$$

This is a special instance of Case 2. We may define

$$d_{ij}(\mathbf{x}) = d_i(\mathbf{x}) - d_j(\mathbf{x}) = (\mathbf{w}_i - \mathbf{w}_j)^T \mathbf{x} = \mathbf{w}_{ij}^T \mathbf{x}. \quad (12.7)$$

If the classes are separable under Case 3, they are separable under Case 2; the converse is, in general, not true.
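Case 3 amounts to evaluating all M decision functions and selecting the class with the largest value. A minimal sketch follows; the three weight vectors are hypothetical, chosen only to make the example concrete.

```python
def case3_classify(W, x):
    """Case 3 rule: assign x to the class i with the largest d_i(x) = w_i^T [x, 1]."""
    x_aug = list(x) + [1.0]  # augmented feature vector
    scores = [sum(wi * xi for wi, xi in zip(w, x_aug)) for w in W]
    return max(range(len(W)), key=lambda i: scores[i])  # index of the winning class

# Three hypothetical classes in a 2D feature space
W = [
    [1.0, 0.0, 0.0],    # class 0: favors large x1
    [0.0, 1.0, 0.0],    # class 1: favors large x2
    [-1.0, -1.0, 0.5],  # class 2: favors vectors near the origin
]
```

With these weights, a vector dominated by its first feature is assigned to class 0, one dominated by its second feature to class 1, and a vector near the origin to class 2.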

Patterns that may be separated by linear decision functions as above are said to be linearly separable. In other situations, an infinite variety of complex decision boundaries may be formulated by using generalized decision functions based upon nonlinear functions of the feature vectors, as

$$d(\mathbf{x}) = w_1 f_1(\mathbf{x}) + w_2 f_2(\mathbf{x}) + \dots + w_K f_K(\mathbf{x}) + w_{K+1} = \sum_{i=1}^{K+1} w_i f_i(\mathbf{x}). \quad (12.8)$$

Here, $\{f_i(\mathbf{x})\}$, $i = 1, 2, \dots, K$, are real, single-valued functions of $\mathbf{x}$, and $f_{K+1}(\mathbf{x}) = 1$. Whereas the functions $f_i(\mathbf{x})$ may be nonlinear in the n-dimensional space of $\mathbf{x}$, the decision function may be formulated as a linear function by defining a transformed feature vector $\mathbf{x}^{\dagger} = [f_1(\mathbf{x})\ f_2(\mathbf{x})\ \dots\ f_K(\mathbf{x})\ 1]^T$. Then, $d(\mathbf{x}) = \mathbf{w}^T \mathbf{x}^{\dagger}$, with $\mathbf{w} = [w_1\ w_2\ \dots\ w_K\ w_{K+1}]^T$. Once evaluated, $\{f_i(\mathbf{x})\}$ is just a set of numerical values, and $\mathbf{x}^{\dagger}$ is simply a K-dimensional vector augmented by an entry equal to unity. Several methods exist for the derivation of optimal linear discriminant functions [402, 738, 674].
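The transformed-vector idea of Equation 12.8 can be sketched with quadratic functions $f_i$; the decision is linear in the transformed space even though the boundary in the original space is a curve. The weights below are an illustrative assumption that produces the circular boundary $x_1^2 + x_2^2 = 1$.

```python
def transform(x):
    """Map a 2D vector to the transformed feature vector (quadratic terms plus unity)."""
    x1, x2 = x
    return [x1 * x1, x2 * x2, x1 * x2, x1, x2, 1.0]

def generalized_d(w, x):
    """d(x) = w^T x-transformed: linear in the transformed space, nonlinear in x."""
    return sum(wi * fi for wi, fi in zip(w, transform(x)))

# Illustrative weights giving the circular boundary x1^2 + x2^2 = 1:
# d(x) = 1 - x1^2 - x2^2 (positive inside the unit circle)
w = [-1.0, -1.0, 0.0, 0.0, 0.0, 1.0]
```

Points inside the unit circle give $d(\mathbf{x}) > 0$ and points outside give $d(\mathbf{x}) < 0$, a boundary no purely linear function of $(x_1, x_2)$ could produce.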

Figure 12.4 shows the ROIs of the 57 masses, arranged in the order of decreasing acutance A (see Sections 2.15, 7.9.2, and 12.12). Figure 12.5 shows the contours of the 57 masses arranged in the increasing order of fractional concavity fcc (see Section 6.4). Most of the contours of the benign masses are seen to be smooth, whereas most of the contours of the malignant tumors are rough and spiculated. Furthermore, most of the benign masses have well-defined, sharp edges and are well-circumscribed, whereas the majority of the malignant tumors possess ill-defined and fuzzy borders. It is seen that the shape factor fcc facilitates the ordering of the


of edge sharpness as defined by Mudigonda et al. [165] (see Section 7.9.2) were computed for the ROIs and their contours. (Note: The factor SI was divided by two in this example to reduce it to the range [0, 1].) Figure 12.6 gives a plot of the 3D feature-vector space (fcc, A, F8) for the 57 masses. The feature F8 shows poor separation between the benign and malignant samples, whereas the feature A demonstrates some degree of separation. A scatter plot of the three shape factors (fcc, cf, SI) of the 57 masses is given in Figure 12.7. Each of the three shape factors demonstrates high discriminant capability.

Figure 12.8 shows a 2D plot of the shape-factor vectors [fcc, SI] for a training set formed by selecting the vectors for 18 benign masses and 10 malignant tumors. The prototypes for the benign and malignant classes, obtained by averaging the vectors over all the members of the two classes in the training set, are marked as 'B' and 'M', respectively, on the plot. The solid straight line is the perpendicular bisector of the line joining the two prototypes (dashed line), and represents a linear discriminant function. The equation of the straight line is SI + 0.6826 fcc − 0.5251 = 0. The decision function is represented by the following rule:
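The straight-line discriminant given above can be applied directly to an (fcc, SI) pair. In this sketch, the assignment of the negative side of the boundary to the benign class is an assumption made here for illustration, consistent with benign masses having smooth contours (low fcc and SI); the function name is hypothetical.

```python
def mass_decision(fcc, SI):
    """Evaluate the linear discriminant SI + 0.6826*fcc - 0.5251 from the text.
    The sign-to-class assignment is an assumption: vectors on the negative side
    (smooth contours, small fcc and SI) are labeled benign here."""
    d = SI + 0.6826 * fcc - 0.5251
    return "malignant" if d > 0 else "benign"
```

For example, a smooth contour with fcc = 0.05 and SI = 0.1 falls well on the negative side of the line, whereas a spiculated contour with fcc = 0.8 and SI = 0.6 falls on the positive side.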

12.2.2 Distance functions

Consider M pattern classes represented by their prototype patterns $\mathbf{z}_1, \mathbf{z}_2, \dots, \mathbf{z}_M$. The prototype of a class is typically computed as the average of all the feature vectors belonging to the class. Figure 12.3 illustrates schematically the prototypes $\mathbf{z}_1$ and $\mathbf{z}_2$ of the two classes shown.


m51rc97 0.088, b164ro94 0.085, b164rc94 0.085, b146ro96 0.084, b62lc97 0.084, b62lo97 0.080

FIGURE 12.4
ROIs of 57 breast masses, including 37 benign masses and 20 malignant tumors. The masses are of widely differing size, but have been scaled to the same size in the illustration. Reproduced with permission from H. Alto, R.M. Rangayyan, and J.E.L. Desautels, "Content-based retrieval and analysis of mammographic masses", Journal of Electronic Imaging, in press, 2005. © SPIE and IS&T.


b161lo95 0.05, b62lx97 0.052, b62rc97d 0.061, b158lo95 0.063, b164rc94 0.064, b145lo95 0.074

FIGURE 12.5
Contours of 57 breast masses, including 37 benign masses and 20 malignant tumors. The masses and their contours are of widely differing size, but have been scaled to the same size in the illustration. Reproduced with permission from H. Alto, R.M. Rangayyan, and J.E.L. Desautels, "Content-based retrieval and analysis of mammographic masses", Journal of Electronic Imaging, in press, 2005. © SPIE and IS&T.


FIGURE 12.6
Plot of the 3D feature-vector space (fcc, A, F8) for the set of 57 masses in Figure 12.4. 'o': benign masses (37); '*': malignant tumors (20). Reproduced with permission from H. Alto, R.M. Rangayyan, and J.E.L. Desautels, "Content-based retrieval and analysis of mammographic masses", Journal of Electronic Imaging, in press, 2005. © SPIE and IS&T.


FIGURE 12.7
Plot of the 3D feature-vector space (fcc, cf, SI) for the set of 57 contours in Figure 12.5. 'o': benign masses (37); '*': malignant tumors (20). Figure courtesy of H. Alto.

The Euclidean distance between an arbitrary pattern vector $\mathbf{x}$ and the ith prototype is given as

$$D_i = \|\mathbf{x} - \mathbf{z}_i\|, \quad i = 1, 2, \dots, M.$$

A simple relationship may be established between discriminant functions and distance functions as follows [1086]:

$$D_i^2 = \|\mathbf{x} - \mathbf{z}_i\|^2 = \mathbf{x}^T\mathbf{x} - 2\,\mathbf{x}^T\mathbf{z}_i + \mathbf{z}_i^T\mathbf{z}_i.$$

Choosing the minimum of $D_i^2$ is equivalent to choosing the minimum of $D_i$ (because all $D_i > 0$). Furthermore, from the equation above, it follows


The solid line shown is a linear decision function designed as illustrated in Figure 12.8. Three malignant cases are misclassified by the decision function shown.


that choosing the minimum of $D_i^2$ is equivalent to choosing the maximum of $\left(\mathbf{x}^T\mathbf{z}_i - \frac{1}{2}\,\mathbf{z}_i^T\mathbf{z}_i\right)$, because the term $\mathbf{x}^T\mathbf{x}$ is common to all of the classes; therefore, the decision function may be formulated as $d_i(\mathbf{x}) = \mathbf{x}^T\mathbf{z}_i - \frac{1}{2}\,\mathbf{z}_i^T\mathbf{z}_i$, $i = 1, 2, \dots, M$.
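The equivalence between minimizing the Euclidean distance to a prototype and maximizing $\mathbf{x}^T\mathbf{z}_i - \frac{1}{2}\mathbf{z}_i^T\mathbf{z}_i$ gives a minimum-distance classifier directly. A minimal sketch with two hypothetical prototypes:

```python
def min_distance_classify(prototypes, x):
    """Assign x to the prototype index maximizing d_i(x) = x^T z_i - 0.5 * z_i^T z_i,
    which is equivalent to choosing the minimum Euclidean distance."""
    def d(z):
        return sum(xi * zi for xi, zi in zip(x, z)) - 0.5 * sum(zi * zi for zi in z)
    return max(range(len(prototypes)), key=lambda i: d(prototypes[i]))

# Two hypothetical class prototypes in 2D
z = [[0.0, 0.0], [4.0, 4.0]]
```

A vector near the origin is assigned to the first prototype; one near (4, 4) to the second, exactly as a direct nearest-prototype comparison would decide.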

12.2.3 The nearest-neighbor rule

Suppose that we are provided with a set of N sample patterns $\{\mathbf{s}_1, \mathbf{s}_2, \dots, \mathbf{s}_N\}$ of known classification: each pattern belongs to one of M classes $\{C_1, C_2, \dots, C_M\}$, with $N \gg M$. We are then given a new feature vector $\mathbf{x}$ whose class needs to be determined. Let us compute a distance measure $D(\mathbf{s}_i, \mathbf{x})$ between the vector $\mathbf{x}$ and each sample pattern. Then, the nearest-neighbor rule states that the vector $\mathbf{x}$ is to be assigned to the class of the sample that is the closest to $\mathbf{x}$:

$$\mathbf{x} \in C_i \text{ if } D(\mathbf{s}_i, \mathbf{x}) = \min\{D(\mathbf{s}_l, \mathbf{x})\}, \quad l = 1, 2, \dots, N. \quad (12.15)$$

A major disadvantage of the above method is that the classification decision is made based upon a single sample vector of known classification. The nearest neighbor may happen to be an outlier that is not representative of its class. It would be more reliable to base the classification upon several samples: we may consider a certain number k of the nearest neighbors of the sample to be classified, and then seek a majority opinion. This leads to the so-called k-nearest-neighbor or k-NN rule: Determine the k nearest neighbors of $\mathbf{x}$, and use the majority of equal classifications in this group as the classification of $\mathbf{x}$. See Section 12.12 for the description of an application of the k-NN method to the analysis of breast masses and tumors.
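The k-NN rule described above can be sketched directly; the sample coordinates and class labels below are illustrative placeholders, not data from the study.

```python
import math
from collections import Counter

def knn_classify(samples, labels, x, k=3):
    """k-NN rule: find the k samples nearest to x and take a majority vote."""
    order = sorted(range(len(samples)), key=lambda i: math.dist(samples[i], x))
    votes = Counter(labels[i] for i in order[:k])  # labels of the k nearest samples
    return votes.most_common(1)[0][0]              # majority label

# Illustrative 2D samples of known classification
samples = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
labels = ["benign", "benign", "benign", "malignant", "malignant", "malignant"]
```

With k = 1 this reduces to the nearest-neighbor rule of Equation 12.15; increasing k trades sensitivity to outliers for smoother class regions.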

12.3 Unsupervised Pattern Classification

Let us consider the situation where we are given a set of feature vectors with no categorization or classes attached to them. No prior training information is available. How may we group the vectors into multiple categories?


Such a problem is labeled as unsupervised pattern classification, and may be solved by cluster-seeking methods.

12.3.1 Cluster-seeking methods

Given a set of feature vectors, we may examine them for the formation of inherent groups or clusters. This is a simple task in the case of 2D vectors, where we may plot them, visually identify groups, and label each group with a pattern class. Allowance may have to be made to assign the same class to multiple disjoint groups. Such an approach may be used even when the number of classes is not known at the outset. When the vectors have a dimension higher than three, visual analysis will not be feasible. It then becomes necessary to define criteria to group the given vectors on the basis of similarity, dissimilarity, or distance measures. A few examples of such measures are described below [1086]:

Manhattan or city-block distance:
$$D_{CB}(\mathbf{x}, \mathbf{m}) = \sum_{k=1}^{n} |x_k - m_k|.$$

Mahalanobis distance:
$$D_M^2(\mathbf{x}) = (\mathbf{x} - \mathbf{m})^T \mathbf{C}^{-1} (\mathbf{x} - \mathbf{m}),$$
where $\mathbf{m}$ is the class mean vector and $\mathbf{C}$ is the covariance matrix. A small value of $D_M$ indicates a higher potential membership of the vector $\mathbf{x}$ in the class than a large value of $D_M$. (See Section 12.12 for the description of an application of the Mahalanobis distance to the analysis of breast masses and tumors.)

Normalized dot product (cosine of the angle between the vectors $\mathbf{x}$ and $\mathbf{m}$):
$$D_{dot}(\mathbf{x}, \mathbf{m}) = \frac{\mathbf{x}^T \mathbf{m}}{\|\mathbf{x}\|\,\|\mathbf{m}\|}.$$
A large value of the normalized dot product indicates greater similarity of the vector to the given class being considered.

The elements along the main diagonal of the covariance matrix provide the variances of the individual features that make up the feature vector. The covariance matrix represents the scatter of the features that belong to the given class. The mean and covariance need to be updated as more samples are added to a given class in a clustering procedure.
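The update of the class mean mentioned above can be performed incrementally, without revisiting earlier samples. A minimal sketch of the standard recursive mean update follows (the covariance admits a similar recursive form, omitted here for brevity):

```python
def update_mean(mean, n, x):
    """Incrementally update a class mean vector after adding sample x;
    n is the number of samples seen before the update."""
    new_mean = [m + (xi - m) / (n + 1) for m, xi in zip(mean, x)]
    return new_mean, n + 1

# Adding three samples one at a time yields the same mean as a batch average
mean, n = [0.0, 0.0], 0
for x in ([1.0, 2.0], [3.0, 4.0], [5.0, 6.0]):
    mean, n = update_mean(mean, n, x)
```

Each update costs only one pass over the vector's components, which is what makes sample-by-sample clustering procedures practical.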

When the Mahalanobis distance needs to be calculated between a sample vector and a number of classes represented by their mean and covariance matrices, a pooled covariance matrix may be used if the numbers of members in the various classes are unequal and low [1088]. If the covariance matrices of two classes are $\mathbf{C}_1$ and $\mathbf{C}_2$, and the numbers of members in the two classes are $N_1$ and $N_2$, the pooled covariance matrix is given by

$$\mathbf{C} = \frac{(N_1 - 1)\,\mathbf{C}_1 + (N_2 - 1)\,\mathbf{C}_2}{N_1 + N_2 - 2}.$$

A commonly used performance index for cluster seeking is the sum of squared errors,

$$J = \sum_{j=1}^{K} \sum_{\mathbf{x} \in S_j} \|\mathbf{x} - \mathbf{m}_j\|^2,$$

where $\mathbf{m}_j = \frac{1}{N_j} \sum_{\mathbf{x} \in S_j} \mathbf{x}$ is the sample mean vector of $S_j$, and $N_j$ is the number of samples in $S_j$.

A few other examples of performance indices are:


A simple cluster-seeking procedure is as follows:

1. Let the first sample be the first cluster center: $\mathbf{z}_1 = \mathbf{x}_1$.

2. Choose a nonnegative threshold $\theta$.

3. Compute the distance $D_{21}$ between $\mathbf{x}_2$ and $\mathbf{z}_1$. If $D_{21} < \theta$, assign $\mathbf{x}_2$ to the domain (class) of cluster center $\mathbf{z}_1$; otherwise, start a new cluster with its center as $\mathbf{z}_2 = \mathbf{x}_2$. For the subsequent steps, let us assume that a new cluster with center $\mathbf{z}_2$ has been established.

4. Compute the distances $D_{31}$ and $D_{32}$ from the next sample $\mathbf{x}_3$ to $\mathbf{z}_1$ and $\mathbf{z}_2$, respectively. If $D_{31}$ and $D_{32}$ are both greater than $\theta$, start a new cluster with its center as $\mathbf{z}_3 = \mathbf{x}_3$; otherwise, assign $\mathbf{x}_3$ to the domain of the closer cluster.

5. Continue to apply Steps 3 and 4 by computing and checking the distance from every new (unclassified) pattern vector to every established cluster center and applying the assignment or cluster-creation rule.

6. Stop when every given pattern vector has been assigned to a cluster.

Observe that the procedure does not require a priori knowledge of the number of classes. Recognize also that the procedure does not assign a real-world class to each cluster: it merely groups the given vectors into disjoint clusters. A subsequent step is required to label each cluster with a class related to the actual problem. Multiple clusters may relate to the same real-world class, and may have to be merged.
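The threshold-based procedure above can be sketched compactly; the Euclidean distance is assumed here as the distance measure, and the sample data are illustrative.

```python
import math

def simple_cluster(samples, theta):
    """Threshold-based cluster seeking: assign each sample to the nearest
    existing center if within theta, else start a new cluster at that sample."""
    centers, assignment = [], []
    for x in samples:
        if centers:
            d, j = min((math.dist(x, c), j) for j, c in enumerate(centers))
            if d < theta:
                assignment.append(j)   # join the domain of the closest center
                continue
        centers.append(x)              # start a new cluster centered at x
        assignment.append(len(centers) - 1)
    return centers, assignment

centers, assignment = simple_cluster([[0, 0], [0.5, 0], [5, 5], [5, 5.5]], theta=2.0)
```

Note how the outcome depends on the order of the samples and on the threshold, which is exactly the sensitivity discussed next.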

A major disadvantage of the simple cluster-seeking algorithm is that the results depend upon:

- the first cluster center chosen for each domain or class,
- the order in which the sample patterns are considered,
- the value of the threshold, and
- the geometrical properties or distributions of the data, that is, the feature-vector space.

The maximin-distance algorithm [1086] is similar to the previous "simple" algorithm, but first identifies the cluster regions that are the farthest apart. The term "maximin" refers to the combined use of maximum and minimum distances between the given vectors and the centers of the clusters already formed.

1. Let $\mathbf{x}_1$ be the first cluster center $\mathbf{z}_1$.

2. Determine the farthest sample from $\mathbf{x}_1$, and label it as cluster center $\mathbf{z}_2$.

3. Compute the distance from each remaining sample to $\mathbf{z}_1$ and to $\mathbf{z}_2$. For every pair of these computations, save the minimum distance, and select the maximum of the minimum distances. If this "maximin" distance is an appreciable fraction of the distance between the cluster centers $\mathbf{z}_1$ and $\mathbf{z}_2$, label the corresponding sample as a new cluster center $\mathbf{z}_3$; otherwise, stop forming new clusters and go to Step 5.

4. If a new cluster center was formed in Step 3, repeat Step 3 using a "typical" or the average distance between the established cluster centers for comparison.

5. Assign each remaining sample to the domain of its nearest cluster center.

The K-means algorithm [1086]: The preceding "simple" and "maximin" algorithms are intuitive procedures. The K-means algorithm is based on iterative minimization of a performance index that is defined as the sum of the squared distances from all points in a cluster domain to the cluster center.

1. Choose K initial cluster centers $\mathbf{z}_1(1), \mathbf{z}_2(1), \dots, \mathbf{z}_K(1)$; K is the number of clusters to be formed. The choice of the cluster centers is arbitrary, and could be the first K of the feature vectors available. The index in parentheses represents the iteration number.

2. At the kth iterative step, distribute the samples $\{\mathbf{x}\}$ among the K cluster domains, using the relation

$$\mathbf{x} \in S_j(k) \text{ if } \|\mathbf{x} - \mathbf{z}_j(k)\| < \|\mathbf{x} - \mathbf{z}_i(k)\| \quad \forall i = 1, 2, \dots, K;\ i \ne j, \quad (12.24)$$

where $S_j(k)$ denotes the set of samples whose cluster center is $\mathbf{z}_j(k)$.

3. From the results of Step 2, compute the new cluster centers $\mathbf{z}_j(k+1)$, $j = 1, 2, \dots, K$, such that the sum of the squared distances from all points in $S_j(k)$ to the new cluster center is minimized; the center that minimizes this sum is simply the sample mean of $S_j(k)$. Therefore, the new cluster center is given by

$$\mathbf{z}_j(k+1) = \frac{1}{N_j(k)} \sum_{\mathbf{x} \in S_j(k)} \mathbf{x},$$

where $N_j(k)$ is the number of samples in $S_j(k)$. The name "K-means" is derived from the manner in which cluster centers are sequentially updated.

4. If $\mathbf{z}_j(k+1) = \mathbf{z}_j(k)$ for $j = 1, 2, \dots, K$, the algorithm has converged: terminate the procedure; otherwise, go to Step 2.
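Steps 1 through 4 of the K-means iteration can be sketched directly; the sample data and initial centers below are illustrative, and empty clusters simply keep their previous center in this sketch.

```python
import math

def k_means(samples, initial_centers, max_iter=100):
    """K-means iteration (Steps 1-4 above): assign samples to the nearest
    center, recompute each center as the sample mean, repeat until unchanged."""
    centers = [list(c) for c in initial_centers]
    for _ in range(max_iter):
        clusters = [[] for _ in centers]
        for x in samples:                      # Step 2: nearest-center assignment
            j = min(range(len(centers)), key=lambda i: math.dist(x, centers[i]))
            clusters[j].append(x)
        new_centers = [                        # Step 3: sample mean of each cluster
            [sum(col) / len(cl) for col in zip(*cl)] if cl else c
            for cl, c in zip(clusters, centers)
        ]
        if new_centers == centers:             # Step 4: convergence test
            break
        centers = new_centers
    return centers, clusters

samples = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
centers, clusters = k_means(samples, initial_centers=[[0, 0], [10, 10]])
```

With K = 2 and well-separated groups, the centers converge to the two group means in a couple of iterations.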

The behavior of the K-means algorithm is influenced by:

- the number of cluster centers specified (K),
- the choice of the initial cluster centers,
- the order in which the sample patterns are considered, and
- the geometrical properties or distributions of the data, that is, the feature-vector space.

Figures 12.10 to 12.14 illustrate the application of the K-means algorithm to 2D feature vectors composed of the shape factors fcc and SI of the 57 breast mass contours shown in Figure 12.5 (see Section 12.12 for details). Although the categories of the samples would be unknown in a practical situation, the samples are identified in the plots with the + symbol for malignant tumors and the 'o' symbol for the benign masses. (The categorization represents the ground-truth or true classification of the samples based upon biopsy.)

The plots in Figures 12.10 to 12.14 show the progression of the K-means algorithm from its initial state to the converged state; K = 2 in this example, representing the benign and malignant categories. The only prior knowledge or assumption used is that the samples are to be split into two clusters, that is, there are two classes. Figure 12.10 shows two samples selected to represent the cluster centers, marked with the diamond and asterisk symbols. The straight line indicates the decision boundary, which is the perpendicular bisector of the straight line joining the two cluster centers. The K-means algorithm converged, in this case, at the fifth iteration (that is, there was no change in the cluster centers after the fifth iteration). The final decision boundary results in the misclassification of four of the malignant samples as being benign. It is interesting to note that even though the two initial cluster centers belong to the benign category, the algorithm has converged to a useful solution. (See Section 12.12.1 for examples of application of other pattern classification techniques to the same dataset.)

12.4 Probabilistic Models and Statistical Decision

Pattern classification methods such as discriminant functions are dependent upon the set of training samples provided. Their success when applied to new cases will depend upon the accuracy of the representation of the various pattern classes by the training samples. How can we design pattern classification techniques that are independent of specific training samples and are optimal in a broad sense?

Probability functions and probabilistic models may be developed to represent the occurrence and statistical attributes of classes of patterns. Such functions may be based upon large collections of data, historical records, or mathematical models of pattern generation. In the absence of information as above, a training step with samples of known categorization will be required to estimate the required model parameters. It is common practice to assume a Gaussian PDF to represent the distribution of the features for each class, and estimate the required mean and variance parameters from the training sets. When PDFs are available to characterize pattern classes and their features, optimal decision functions may be designed, based upon statistical functions and decision theory. The following subsections describe a few methods in this category.
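The common practice described above, fitting a Gaussian PDF to each class from its training samples, can be sketched for a single feature as follows; the unbiased (n − 1) variance estimate is an assumption, as the text does not specify the estimator.

```python
import math

def fit_gaussian(samples):
    """Estimate the mean and variance parameters of a Gaussian PDF
    from a training set of scalar feature values (unbiased variance)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, var

def gaussian_pdf(x, mean, var):
    """Gaussian likelihood p(x | class) for the estimated parameters."""
    return math.exp(-((x - mean) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

mean, var = fit_gaussian([1.0, 2.0, 3.0])
```

Once such a PDF is fitted per class, the likelihood values p(x | class) feed directly into the statistical decision rules of the following subsections.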

12.4.1 Likelihood functions and statistical decision

Let $P(C_i)$ be the probability of occurrence of class $C_i$, $i = 1, 2, \dots, M$.
