A Fast Method to Segment Images with Additive Intensity Value
LAU TZE SIONG
I would like to express my heartfelt gratitude to Assistant Professor Andy Yip for accepting me as his graduate student. I would also like to take this opportunity to thank him for his help and guidance throughout this project. His advice on different aspects of life, not just academically, influenced me greatly and is much appreciated.
Contents

1 Preliminaries
1.1 Soft Additive Model
1.2 Existence of solutions for the Soft Additive Model
2 Methods to optimize the soft additive model
2.1 Augmented Lagrangian method on level sets
2.2 Lagged Curvature Method
2.2.1 Formulation of Outer Iterations
2.2.2 Augmented Lagrangian Method on the Inner Iterations
3 Solutions of subproblems
3.1 Solution for the augmented Lagrangian method on level sets
3.1.1 Problems (2.3), (2.17) and (2.19)
3.1.2 Problem (2.4)
3.1.3 Problem (2.5)
3.1.4 Problems (2.6), (2.7)
3.1.5 Problems (2.8), (2.9)
3.2 Solutions for the lagged curvature method
3.2.1 Problem (2.23)
3.2.2 Problems (2.24)-(2.26)
We consider the problem of segmenting a pair of overlapping objects whose intensity level in the intersection is approximately the sum of those of the individual objects. We assume that the image domain Ω = [0, N] × [0, M] contains two overlapping objects O1 ⊆ Ω and O2 ⊆ Ω, and consider images u : Ω → R that follow this additive intensity model. The goal is to find a pair of objects {E1, E2} such that E1, E2 ⊆ Ω and {E1∖E2, E2∖E1, E1 ∩ E2, Ω∖(E1 ∪ E2)} forms a partition of Ω, with E1, E2 approximating the true objects O1, O2. The real-world applications of this model include X-ray images [1], magnetic resonance angiography images [14, 7] and microscopy images recording protein expression levels [11], for which standard segmentation models do not work. In [16], the authors proposed to solve the additive segmentation problem by looking for a segmentation {E1, E2} and a set of constants c = (c10, c01, c11, c00) that minimize the soft additive energy. This energy contains a curvature term, and applying the gradient descent method to the model leads to a fourth-order Euler-Lagrange equation which is often difficult to solve efficiently.
In this thesis, we present two methods to optimize the soft additive model. In the first method, we adapt the augmented Lagrangian method developed in [25] for optimizing Euler's elastica to solve the Euler-Lagrange equations. In the second method, we formulate a new Euler-Lagrange equation by placing the terms resulting from the curvature term one step behind the rest, and call it the lagged Euler-Lagrange equation. In each step, we formulate a constrained convex minimization problem whose minimizer is a solution of the lagged Euler-Lagrange equation. Each of these constrained convex minimization problems can be solved by applying the augmented Lagrangian method [10, 24]. The subproblems arising from the augmented Lagrangian method can be solved directly, either by an explicit formula or by applying the Discrete Cosine Transform. The solution of the Euler-Lagrange equation is obtained by allowing the iterative map to converge to a fixed point.
This thesis is organized as follows. We first review the soft additive model and some of its results in Chapter 1. In Chapter 2, we give details of the adaptation of the augmented Lagrangian method to solve the soft additive model, and also of the lagged curvature method. In Chapter 3, we provide solutions for the unconstrained minimization problems occurring in the algorithms developed. The numerical results are given in Chapter 4 and the thesis is summarized in Chapter 5.
List of Figures

4.1 Solutions for Image 1
4.2 Solutions for Image 2
4.3 Solutions for Image 3
4.4 Solutions for Image RHip
4.5 Solutions for Image Vessel
4.6 Solutions for Image Arm
4.7 Comparison of algorithms with respect to energy for Image 1
4.8 Comparison of algorithms with respect to energy for Image 2
4.9 Comparison of algorithms with respect to energy for Image 3
4.10 Comparison of algorithms with respect to energy for Image RHip
4.11 Comparison of algorithms with respect to energy for Image Vessel
4.12 Comparison of algorithms with respect to energy for Image Arm
4.13 Comparison of segmentation errors for Image 1
4.14 Comparison of segmentation errors for Image 2
4.15 Comparison of segmentation errors for Image 3
4.16 Comparison of segmentation errors for Image RHip
4.17 Comparison of segmentation errors for Image Vessel
4.18 Comparison of segmentation errors for Image Arm
4.19 Segmentations for Image 1
4.20 Segmentations for Image 2
4.21 Segmentations for Image 3
4.22 Segmentations for Image RHip
4.23 Segmentations for Image Vessel
4.24 Segmentations for Image Arm
1 Preliminaries

A closed plane curve is a map γ : [0, 1] → R² such that γ(0) = γ(1) and dγ/dt exists and is continuous for every t ∈ [0, 1]. It is said to be regular if dγ/dt ≠ 0 for each t ∈ [0, 1]. We denote the arc length parameter by s, and γ′, γ″ denote the first and second derivatives of γ with respect to s. If the nth derivative γ⁽ⁿ⁾ exists and is continuous, we say that γ is a curve of class Cⁿ, and we write γ ∈ Cⁿ. We denote C^∞ = ⋂_{n=1}^{∞} Cⁿ. We also denote the curvature of a curve by κ = |γ″|.
Given a Lebesgue measurable set E ⊆ R², we denote its boundary by ∂E. We say that a bounded open set E is of class C^∞ if and only if its boundary ∂E is a closed plane curve of class C^∞. A signed distance function of a set E is a function Dist(E) : Ω → R defined as

    Dist(E)(x) ≜ (−1)^{χ_E(x)} \inf\{ |x − y| : y ∈ ∂E \}.
A sequence of measurable sets {E_i} is said to converge to a measurable set E if and only if χ_{E_i} → χ_E in L¹(Ω).
Using ideas from [5, 20, 19], we are ready to introduce the soft additive functional of [16], defined for (E_1, E_2, c) ∈ C^∞ × C^∞ × R⁴ as

    F_{soft}(E_1, E_2, c) = \sum_{i=1}^{2} \int_{\partial E_i \cap \Omega} [\alpha + \beta\,\phi(\kappa_i(z))]\, d\mathcal{H}(z) + \gamma \sum_{i,j=0}^{1} \int_{E_{ij}} (u - c_{ij})^2\, dx\, dy,    (1.1)

where H is the 1-dimensional Hausdorff measure, α, β, γ > 0, E_1, E_2 ⊆ Ω are of class C^∞, E_{10} = E_1∖E_2, E_{01} = E_2∖E_1, E_{00} = Ω∖(E_1 ∪ E_2) and E_{11} = E_1 ∩ E_2 are subsets of Ω, c = (c_{10}, c_{01}, c_{00}, c_{11}) is a set of constants, κ_i(z) is the curvature of the curve ∂E_i at the point z ∈ Ω for i = 1, 2, and

    \phi(x) = |x|.    (1.2)
We wish to note that this ϕ is not twice differentiable. Due to numerical considerations, we replace it by a smooth function:

    \phi(x) = \frac{2}{\pi}\left( x \tan^{-1}(rx) - \frac{1}{2r} \log(r^2 x^2 + 1) \right),    (1.3)
where r > 0 is a constant. In this thesis, we choose r = 1.
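To make the behaviour of (1.3) concrete, here is a minimal Python sketch, not from the thesis, that evaluates ϕ and checks that it approaches the modulus function as r grows; the function name phi is ours.

```python
import numpy as np

def phi(x, r=1.0):
    """Smooth surrogate for |x| from equation (1.3)."""
    return (2.0 / np.pi) * (x * np.arctan(r * x)
                            - np.log(r**2 * x**2 + 1.0) / (2.0 * r))

x = np.linspace(-5.0, 5.0, 101)
for r in (1.0, 10.0, 100.0):
    print(r, np.max(np.abs(phi(x, r) - np.abs(x))))  # gap shrinks as r grows
```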
We note that the soft additive functional only makes sense for sets of class C^∞. To work around this difficulty, we extend F_soft to M(Ω) × M(Ω) × R⁴ by setting F_soft(E_1, E_2, c) = +∞ whenever E_1 or E_2 is not of class C^∞, where M(Ω) denotes the collection of measurable subsets of Ω.
Now we relax the soft additive functional by considering the lower semicontinuous envelope of F_soft with respect to the topology of L¹(R²) × L¹(R²) × R⁴. We define the lower semicontinuous envelope F̄_soft of F_soft as F̄_soft : P × P × R⁴ → R such that

    \bar{F}_{soft}(E_1, E_2, c) ≜ \inf\left\{ \liminf_{n\to\infty} F_{soft}(E_{1n}, E_{2n}, c_n) : (E_{1n}, E_{2n}, c_n) \to (E_1, E_2, c) \right\},

where the convergence (E_{1n}, E_{2n}, c_n) → (E_1, E_2, c) is with respect to the L¹(R²) × L¹(R²) × R⁴ topology and P denotes the collection of sets of finite perimeter,

    P ≜ \{ E ⊆ Ω : E is Borel and χ_E ∈ BV(Ω) \}.

It should be noted that F̄_soft is a well-defined function, as the sets of class C^∞ form a dense subset of P with respect to the L¹(R²) topology [12].
The additive segmentation problem is solved by the soft additive model, which seeks an (E_1, E_2, c) that minimizes the soft additive functional. It was also demonstrated in [16] that the soft additive model provides very good numerical results.
To represent a set E of class C^∞ using a level set function, we construct a smooth function ψ on Ω satisfying

    ψ(x) > 0 for x ∈ E,  ψ(x) = 0 for x ∈ ∂E,  ψ(x) < 0 for x ∈ Ω∖(E ∪ ∂E).

Such a ψ is called a level set
function that represents the curve. If ψ is the level set function of a region E, the curvature on the zeroth level set is given by the function ∇·(∇ψ/|∇ψ|). For the soft additive functional to be well defined, we require a level set function ψ such that the term ∇·(∇ψ/|∇ψ|) is defined almost everywhere in Ω. Such a level set function always exists: the sets we are considering are of class C^∞ and, by a result in [17], their signed distance functions are smooth almost everywhere. Using the idea of [5], the soft additive functional can be reformulated in terms of level set functions as

    F_{soft}(ψ_1, ψ_2, c) = \sum_{i=1}^{2} \int_\Omega [\alpha + \beta\,\phi(\kappa_i)]\, |\nabla\psi_i|\, \delta(\psi_i)\, dx\, dy + \gamma \sum_{i,j=0}^{1} \int_\Omega (u - c_{ij})^2\, M_{ij}(\psi_1, \psi_2)\, dx\, dy,    (1.5)

where ψ_1, ψ_2 are almost everywhere smooth functions, κ_i = ∇·(∇ψ_i/|∇ψ_i|), δ and H are the Dirac delta and the Heaviside function, respectively, and M_{10} = H(ψ_1)(1 − H(ψ_2)), M_{01} = (1 − H(ψ_1))H(ψ_2), M_{11} = H(ψ_1)H(ψ_2), M_{00} = (1 − H(ψ_1))(1 − H(ψ_2)) are the indicators of the four regions.
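Both methods below repeatedly evaluate the curvature κ_i = ∇·(∇ψ_i/|∇ψ_i|). The following minimal Python sketch approximates it with central differences; the discretization (numpy.gradient with a small ϵ guarding the division) is our own choice and not necessarily the one used in the thesis.

```python
import numpy as np

def curvature(psi, h=1.0, eps=1e-8):
    """Approximate kappa = div(grad(psi) / |grad(psi)|) on a regular grid."""
    psi_y, psi_x = np.gradient(psi, h)            # axis 0 -> y, axis 1 -> x
    norm = np.sqrt(psi_x**2 + psi_y**2 + eps**2)  # regularized |grad(psi)|
    ny, nx = psi_y / norm, psi_x / norm           # unit normal field
    return np.gradient(nx, h, axis=1) + np.gradient(ny, h, axis=0)
```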
To solve the soft additive model numerically (see [16]), we regularize the Heaviside function, the Dirac delta function and the modulus function using smooth functions H_ϵ, δ_ϵ and |·|_ϵ, where ϵ is the regularization coefficient. Thus, we minimize the regularized soft additive functional

    F^ϵ_{soft}(ψ_1, ψ_2, c) = \sum_{i=1}^{2} \int_\Omega [\alpha + \beta\,\phi(\kappa_i)]\, |\nabla\psi_i|_ϵ\, \delta_ϵ(\psi_i)\, dx\, dy + \gamma \sum_{i,j=0}^{1} \int_\Omega (u - c_{ij})^2\, M^ϵ_{ij}(\psi_1, \psi_2)\, dx\, dy,    (1.6)

where M^ϵ_{ij} is the region indicator M_{ij} of (1.5) with H replaced by H_ϵ.
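The extracted text does not display the regularized functions themselves, so the sketch below uses one standard choice, common in Chan-Vese-type implementations, as an assumption on our part: the arctan-based H_ϵ with δ_ϵ = H_ϵ′, and |x|_ϵ = √(x² + ϵ²).

```python
import numpy as np

def heaviside_eps(x, eps):
    # Smooth approximation of the Heaviside function (assumed arctan form).
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(x / eps))

def delta_eps(x, eps):
    # Derivative of heaviside_eps: smooth approximation of the Dirac delta.
    return eps / (np.pi * (eps**2 + x**2))

def abs_eps(x, eps):
    # Smooth approximation |x|_eps of the modulus function.
    return np.sqrt(x**2 + eps**2)
```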
1.2 Existence of solutions for the Soft Additive Model

In this section, we study the existence of solutions for the soft additive model. First, we show that any minimizing sequence of F̄_soft is relatively compact with respect to the L¹(R²) × L¹(R²) × R⁴ topology.

Proposition 1.2.1. Let {(E_{1n}, E_{2n}, c_n)}_{n∈N} be a minimizing sequence of F̄_soft with {c_n} bounded. Then it has a subsequence converging to some (E_1, E_2, c) ∈ P × P × R⁴.

Proof. Let {(E_{1n}, E_{2n}, c_n)}_{n∈N} be such a minimizing sequence of F̄_soft. By deleting a finite number of terms, we may assume that sup_n F̄_soft(E_{1n}, E_{2n}, c_n) ≤ F̄_soft(E_{11}, E_{21}, c_1) < ∞.
Since the image domain Ω is bounded, there exists a ball B(0, R) such that E_{1n}, E_{2n} ⊆ Ω ⊆ B(0, R) for all n ∈ N. Since {(E_{1n}, E_{2n}, c_n)}_{n∈N} is a minimizing sequence, the perimeters of the sets E_{1n} and E_{2n} are bounded above by F̄_soft(E_{11}, E_{21}, c_1). Thus χ_{E_{1n}}, χ_{E_{2n}} are in BV(Ω) for all n ∈ N. By the Rellich Compactness Theorem in BV [12], there exist bounded sets E_1, E_2 ∈ P and subsequences {E_{1n_k}}_{k∈N}, {E_{2n_k}}_{k∈N} such that E_{1n_k} converges to E_1 and E_{2n_k} converges to E_2 in L¹(R²) as k → ∞. Thus the subsequence {(E_{1n_k}, E_{2n_k})}_{k∈N} converges to (E_1, E_2) with respect to the L¹(R²) × L¹(R²)
topology. Since {c_n} is a bounded sequence, {c_{n_k}} is also a bounded sequence. By the Heine-Borel Theorem, there exist c ∈ R⁴ and a convergent subsequence {c_{n_{k_j}}} such that c_{n_{k_j}} → c. Thus, the subsequence {(E_{1n_{k_j}}, E_{2n_{k_j}}, c_{n_{k_j}})}_{j∈N} converges to (E_1, E_2, c) with respect to the L¹(R²) × L¹(R²) × R⁴ topology. This concludes the proof.
Remark 1.2.2. For the rest of this section, we assume that the sequence {c_n}_{n∈N} is bounded. This is a reasonable assumption, as it will be seen in the later chapters that c_n can be chosen to be the 'average' intensity of the region it represents in the image domain Ω for each n ∈ N.
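For concreteness, the sketch below computes such 'average' intensities from (possibly regularized) indicator arrays H1 and H2 of E_1 and E_2; the function and variable names are ours, not the thesis's.

```python
import numpy as np

def update_constants(u, H1, H2):
    """c_ij = mean intensity of u over region E_ij (cf. Remark 1.2.2)."""
    weights = {
        "c10": H1 * (1.0 - H2),
        "c01": (1.0 - H1) * H2,
        "c11": H1 * H2,
        "c00": (1.0 - H1) * (1.0 - H2),
    }
    return {k: float((u * w).sum() / max(w.sum(), 1e-12))
            for k, w in weights.items()}
```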
Since the limit of a sequence of sets {E_i}_{i∈N} of class C^∞ may not be of class C^∞, it is possible that the functional F_soft has no minimizers. However, we can show that F̄_soft has minimizers.

Theorem 1.2.3. There exists (E_1^*, E_2^*, c^*) ∈ P × P × R⁴ that minimizes F̄_soft.

Proof. Let {(E_{1n}, E_{2n}, c_n)}_{n∈N} be a minimizing sequence of F̄_soft. By the previous proposition, there exists a subsequence {(E_{1n_k}, E_{2n_k}, c_{n_k})}_{k∈N} that converges to some (E_1^*, E_2^*, c^*) ∈ P × P × R⁴. We reindex this subsequence as {(E_{1j}, E_{2j}, c_j)}_{j∈N} and denote the infimum by

    m = \inf_{(E_1, E_2, c) ∈ P × P × R⁴} \bar{F}_{soft}(E_1, E_2, c).

Since {(E_{1j}, E_{2j}, c_j)}_{j∈N} is also a minimizing sequence of F̄_soft, the lower semicontinuity of F̄_soft yields

    \bar{F}_{soft}(E_1^*, E_2^*, c^*) ≤ \liminf_{j→∞} \bar{F}_{soft}(E_{1j}, E_{2j}, c_j) = m,

so (E_1^*, E_2^*, c^*) is a minimizer of F̄_soft. This concludes the proof.
2 Methods to optimize the soft additive model

In this chapter, we present two methods to optimize the soft additive functional, both of which proceed by attempting to solve for a fixed point of the Euler-Lagrange equations of the soft additive functional.

In the literature on image segmentation, there are many methods which may be adapted to optimize the soft additive functional. In [2], the authors used the method of graph cuts to denoise an image with a model involving a curvature term. In another paper [6], the authors applied the method of convex splitting to solve a fourth-order partial differential equation. Multigrid methods [4] have also been used to solve image segmentation models. The methods mentioned above may be adapted to optimize the soft additive functional.
2.1 Augmented Lagrangian method on level sets

In this section we follow the ideas discussed in [25] and apply the augmented Lagrangian method to the level set formulation of the soft additive model. Before applying the augmented Lagrangian method, we convert the minimization problem (1.5) into a constrained optimization problem by introducing the new variables p_i and n_i for i = 1, 2 satisfying the following equations:

    p_i = ∇ψ_i,  n_i = ∇ψ_i / |∇ψ_i|.

The last constraint above can be reformulated as n_i |p_i| = p_i. Following a similar argument in [25], we split the two constraints into

    p_i = ∇ψ_i,  n_i = m_i,  m_i · p_i = |p_i|,  |m_i| ≤ 1,  for i = 1, 2.
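To make the split concrete, the sketch below evaluates the L¹ violations of the three constraints for vector fields stored as arrays of shape (2, H, W); the exact norms used for the residuals e_x in the algorithm further below are our assumption.

```python
import numpy as np

def constraint_residuals(p, m, n, grad_psi, eps=1e-8):
    """L1 violations of p = grad(psi), n = m and m . p = |p|_eps."""
    e_p = np.abs(p - grad_psi).sum()
    e_n = np.abs(n - m).sum()
    p_norm_eps = np.sqrt((p**2).sum(axis=0) + eps**2)  # |p|_eps per pixel
    e_m = np.abs(p_norm_eps - (m * p).sum(axis=0)).sum()
    return e_p, e_n, e_m
```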
Using this change of variables, the problem of minimizing the functional in (1.5) is equivalent to the following constrained minimization problem:

    \min_{ψ_i, c, p_i, n_i, m_i} \sum_{i=1}^{2} \int_\Omega [\alpha + \beta\,\phi(\nabla\cdot n_i)]\, |p_i|_ϵ\, \delta_ϵ(\psi_i)\, dx\, dy + \gamma \sum_{i,j=0}^{1} \int_\Omega (u - c_{ij})^2\, M^ϵ_{ij}(\psi_1, \psi_2)\, dx\, dy    (2.1)
    subject to p_i = ∇ψ_i, n_i = m_i, m_i · p_i = |p_i|_ϵ, |m_i| ≤ 1, for i = 1, 2.
Following similar ideas in [25], we define the following augmented Lagrangian functional:

    L(ψ_1, ψ_2, c, p_1, p_2, m_1, m_2, n_1, n_2; λ_{p_i}, λ_{n_i}, λ_{m_i})
      = \sum_{i=1}^{2} \int_\Omega [\alpha + \beta\,\phi(\nabla\cdot n_i)]\, |p_i|_ϵ\, \delta_ϵ(\psi_i)\, dx\, dy + \gamma \sum_{i,j=0}^{1} \int_\Omega (u - c_{ij})^2\, M^ϵ_{ij}(\psi_1, \psi_2)\, dx\, dy
      + r_{m_1} \int_\Omega (|p_1|_ϵ − m_1 · p_1)\, dx\, dy + \int_\Omega λ_{m_1} (|p_1|_ϵ − m_1 · p_1)\, dx\, dy
      + r_{m_2} \int_\Omega (|p_2|_ϵ − m_2 · p_2)\, dx\, dy + \int_\Omega λ_{m_2} (|p_2|_ϵ − m_2 · p_2)\, dx\, dy
      + r_{p_1} \int_\Omega |p_1 − ∇ψ_1|²\, dx\, dy + \int_\Omega λ_{p_1} · (p_1 − ∇ψ_1)\, dx\, dy
      + r_{p_2} \int_\Omega |p_2 − ∇ψ_2|²\, dx\, dy + \int_\Omega λ_{p_2} · (p_2 − ∇ψ_2)\, dx\, dy
      + r_{n_1} \int_\Omega |n_1 − m_1|²\, dx\, dy + \int_\Omega λ_{n_1} · (n_1 − m_1)\, dx\, dy
      + r_{n_2} \int_\Omega |n_2 − m_2|²\, dx\, dy + \int_\Omega λ_{n_2} · (n_2 − m_2)\, dx\, dy
      + δ_R(m_1) + δ_R(m_2),    (2.2)
where λ_{p_1}, λ_{p_2}, λ_{n_1}, λ_{n_2}, λ_{m_1} and λ_{m_2} are Lagrange multipliers, r_{p_1}, r_{p_2}, r_{n_1}, r_{n_2}, r_{m_1} and r_{m_2} are positive penalty parameters, and δ_R denotes the indicator function of the constraint set R ≜ {m : |m| ≤ 1}. It is known that one of the saddle points of the augmented Lagrangian functional gives a minimizer for the constrained minimization problem (2.1). We use an iterative scheme to find the saddle points of (2.2). We initialize the Lagrange multipliers to zero and
perform the outer iteration as described in the algorithm below.

1. Initialize the variables ψ_i^0, c^0, p_i^0, n_i^0, m_i^0 for i = 1, 2, and set the Lagrange multipliers to zero.
2. For k = 1, 2, …, compute an approximate minimizer (ψ_i^k, c^k, p_i^k, n_i^k, m_i^k) of the functional J_k defined below, with the Lagrange multipliers fixed at their values from step k − 1.
3. Update the Lagrange multipliers, e.g. λ_{p_i}^k = λ_{p_i}^{k−1} + r_{p_i} (p_i^k − ∇ψ_i^k), and similarly for λ_{n_i}^k and λ_{m_i}^k.
4. If every residual decreased, keep the penalty parameters unchanged; else we update the penalty parameters of the variables whose residual e_x did not decrease, by

    r_x^{new} = 1.2 r_x,  x ∈ {m_i, n_i, p_i},
where the residual corresponding to a variable x measures the violation of its constraint, e.g.

    e_{p_i} = \int_\Omega |p_i^k − ∇ψ_i^k|\, dx\, dy,  e_{n_i} = \int_\Omega |n_i^k − m_i^k|\, dx\, dy,  e_{m_i} = \int_\Omega (|p_i^k|_ϵ − m_i^k · p_i^k)\, dx\, dy,

and

    J_k(ψ_1, ψ_2, c, p_1, p_2, m_1, m_2, n_1, n_2) ≜ L(ψ_1, ψ_2, c, p_1, p_2, m_1, m_2, n_1, n_2; λ_{p_1}^{k−1}, λ_{p_2}^{k−1}, λ_{m_1}^{k−1}, λ_{m_2}^{k−1}, λ_{n_1}^{k−1}, λ_{n_2}^{k−1})

for a fixed k.
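The penalty-update rule above can be sketched in a few lines of Python; the dictionary bookkeeping is ours, while the factor 1.2 is the one stated in the algorithm.

```python
def update_penalties(r, e_new, e_old, factor=1.2):
    """Multiply r_x by `factor` whenever the residual e_x failed to decrease."""
    for x in r:
        if e_new[x] >= e_old[x]:
            r[x] *= factor
    return r

# Example: update_penalties({"p1": 1.0}, e_new={"p1": 0.5}, e_old={"p1": 0.4})
```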
The minimization of J_k is carried out by a sub-algorithm that alternately minimizes J_k with respect to each variable, with inner iterations indexed by l = 0, 1, …, L − 1. Its stopping rule is:

3. If l + 1 = L, or when the relative change of the variables is less than a predetermined tolerance, we stop this sub-algorithm and update the outer variables with the current inner iterates.
2.2 Lagged Curvature Method

In this section, we propose another method to optimize the soft additive functional. Solving the soft additive model using the gradient descent method is computationally expensive, as the gradient flow is a fourth-order partial differential equation. However, when we allow some of the terms of the Euler-Lagrange equations to be lagged, the problem becomes a lot easier. Similar techniques for other related models have been proposed in [3]. In fact, the new gradient flow corresponds to a convex optimization problem after a suitable change of variables. We split the method into outer and inner iterations and give an outline of the details involved in this section.
2.2.1 Formulation of Outer Iterations

In this subsection, we present two functionals whose Euler-Lagrange equations correspond to the Euler-Lagrange equation of the soft additive functional with some of the terms placed one step behind the rest.

For a fixed k ≥ 1 and level set functions ψ_1^k, ψ_2^k, we define

    G_1(ψ_1; ψ_1^k, ψ_2^k, c) = \int_\Omega [\alpha + \beta\,\phi(\kappa_1^k)]\, |\nabla\psi_1|\, \delta(\psi_1)\, dx\, dy + \int_\Omega R(\psi_1, \psi_2^k, c)\, dx\, dy    (2.10)

and similarly

    G_2(ψ_2; ψ_1^k, ψ_2^k, c) = \int_\Omega [\alpha + \beta\,\phi(\kappa_2^k)]\, |\nabla\psi_2|\, \delta(\psi_2)\, dx\, dy + \int_\Omega R(\psi_1^k, \psi_2, c)\, dx\, dy,    (2.11)

where κ_i^k = ∇·(∇ψ_i^k/|∇ψ_i^k|) is the lagged curvature and R(ψ_1, ψ_2, c) = γ \sum_{i,j=0}^{1} (u − c_{ij})^2 M_{ij}(ψ_1, ψ_2) is the fidelity integrand of (1.5).
One way of minimizing the above functionals numerically is to evolve ψ_1 and ψ_2 with respect to the gradient flows of the regularized G_1 and G_2 (similar to minimizing equation (1.6)); we denote these gradient flows by equations (2.12) and (2.13), with boundary condition ∂ψ_i/∂n = 0 for i = 1, 2, and also call them the lagged Euler-Lagrange equations. Presumably, the steady states of equations (2.12) and (2.13) give ψ_1^{k+1} and ψ_2^{k+1}; this defines the iterative map (ψ_1^k, ψ_2^k) ↦ (ψ_1^{k+1}, ψ_2^{k+1}) of equation (2.14). We can check that a solution (ψ_1^*, ψ_2^*) of the original Euler-Lagrange equations is a fixed point of equation (2.14) too. Hence, we solve the Euler-Lagrange equations for the soft additive model by looking for a fixed point of equation (2.14).
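The outer fixed-point iteration on (2.14) can be sketched as follows; solve_G1 and solve_G2 are placeholders for minimizers of the regularized (2.10) and (2.11), and the max-change stopping rule and iteration cap are our own choices.

```python
import numpy as np

def lagged_curvature(psi1, psi2, c, solve_G1, solve_G2, tol=1e-4, max_iter=200):
    """Iterate the map (2.14) until the level set functions stop changing."""
    for _ in range(max_iter):
        psi1_new = solve_G1(psi1, psi2, c)  # minimize G1, curvature and psi2 lagged
        psi2_new = solve_G2(psi1, psi2, c)  # minimize G2, curvature and psi1 lagged
        change = max(np.max(np.abs(psi1_new - psi1)),
                     np.max(np.abs(psi2_new - psi2)))
        psi1, psi2 = psi1_new, psi2_new
        if change < tol:
            break                           # (approximate) fixed point reached
    return psi1, psi2
```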
Minimizing the functionals (2.10) and (2.11) is difficult as they are non-convex. However, we can use the change of variables h_1^{k+1} = H(ψ_1) and h_2^{k+1} = H(ψ_2) to obtain new minimization problems which are much easier to handle. We only provide the details for the case of h_1^{k+1} = H(ψ_1), as the details for the other case are similar.
The resulting minimization problem in h_1^{k+1} is Problem (2.15). Since h_1^{k+1} is a binary variable, the problem is non-convex. It can be shown that a global minimizer h_1^{k+1} for Problem (2.15) can be found by carrying out the convex minimization, Problem (2.16), in which the variable h̄_1^{k+1} is relaxed to take values in [0, 1].

Before we give a proof of the above result, we require a lemma.
Lemma 2.2.1. The functional minimized in Problem (2.16) satisfies a generalized coarea formula when the range of h̄_1^{k+1} is [0, 1].
Putting all the computations together proves the lemma.
Theorem 2.2.2. A global minimizer h_1^{k+1} for Problem (2.15) can be found by solving for a minimizer h̄_1^{k+1} of Problem (2.16) and setting

    h_1^{k+1} = χ_{Σ_μ},  Σ_μ ≜ \{ z ∈ Ω : h̄_1^{k+1}(z) > μ \},

for almost every μ ∈ [0, 1].
Proof. Let h̄_1^{k+1} be a global minimizer of Problem (2.16). Using the previous lemma, for almost every μ ∈ [0, 1], the characteristic function of the set Σ_μ = {z ∈ Ω : h̄_1^{k+1}(z) > μ} minimizes the functional of Problem (2.16) with respect to h : Ω → {0, 1}. Comparing this functional to the functional in Problem (2.15), we conclude that h_1^{k+1} = χ_{Σ_μ} is a global minimizer of Problem (2.15).
Thus, ψ_1^{k+1} is obtained as follows:

1. Solve for the minimizer h̄_1^{k+1} of the convex minimization problem (2.16).
2. Set h_1^{k+1} = χ_{Σ_μ}, where Σ_μ = {z ∈ Ω : h̄_1^{k+1}(z) > μ} for some μ ∈ (0, 1).
3. Set ψ_1^{k+1} to be a signed distance function of the set {z ∈ Ω : h_1^{k+1}(z) = 1}.

In our implementation, the distance function is computed using the Matlab function bwdist, as in the sketch below. We can use a similar method to obtain ψ_2^{k+1}.
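A minimal Python sketch of steps 2 and 3 follows, with scipy.ndimage.distance_transform_edt playing the role of Matlab's bwdist; the sign convention (ψ positive inside the set, matching the level set convention of Chapter 1) and the default threshold μ = 0.5 are our assumptions.

```python
import numpy as np
from scipy import ndimage

def signed_distance(mask):
    # Distance to the set boundary, positive inside, negative outside.
    inside = ndimage.distance_transform_edt(mask)
    outside = ndimage.distance_transform_edt(~mask)
    return inside - outside

def recover_level_set(h_bar, mu=0.5):
    # Threshold the relaxed minimizer (Theorem 2.2.2), then rebuild psi.
    mask = h_bar > mu
    return signed_distance(mask)
```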
Thus, the outer iterations to solve the soft additive model are described as follows:

1. Initialize the outer loop variables h_1^0, h_2^0 and c^0.
2. For k = 0, 1, 2, …, compute (ψ_1^{k+1}, ψ_2^{k+1}) as above, update c^{k+1} as the average intensities of the regions determined by h_1^{k+1} and h_2^{k+1}, and stop when the relative change of the variables is less than a predetermined tolerance.
2.2.2 Augmented Lagrangian Method on the Inner Iterations

To solve this constrained optimization problem, we define the augmented Lagrangian functional by adding to the objective of Problem (2.16) the terms

    r_{p_1} \int_\Omega |p_1 − ∇h̄_1^{k+1}|²\, dx\, dy + \int_\Omega λ_{p_1} · (p_1 − ∇h̄_1^{k+1})\, dx\, dy
    + r_{s_1} \int_\Omega |h̄_1^{k+1} − 1 + s_1²|²\, dx\, dy + \int_\Omega λ_{s_1} (h̄_1^{k+1} − 1 + s_1²)\, dx\, dy
    + r_{s_2} \int_\Omega |−h̄_1^{k+1} + s_2²|²\, dx\, dy + \int_\Omega λ_{s_2} (−h̄_1^{k+1} + s_2²)\, dx\, dy,

where s_1 and s_2 are slack variables enforcing the box constraint 0 ≤ h̄_1^{k+1} ≤ 1 through h̄_1^{k+1} − 1 + s_1² = 0 and −h̄_1^{k+1} + s_2² = 0.
We initialize the Lagrange multipliers λ_{p_1}, λ_{s_1} and λ_{s_2} as the zero function and, for a given function h_1^0, we use the initializations p_1^0 = ∇h_1^0, s_1^0 = √(1 − h_1^0) and s_2^0 = √(h_1^0), so that the slack constraints hold exactly at the start.
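In code, this initialization reads as below; the clipping, which guards against roundoff taking h_1^0 outside [0, 1], is our addition.

```python
import numpy as np

def init_inner(h0):
    """Initial p, s1, s2 so that the slack constraints hold exactly."""
    p0_y, p0_x = np.gradient(h0)                # p^0 = grad(h^0)
    s1 = np.sqrt(np.clip(1.0 - h0, 0.0, None))  # h - 1 + s1^2 = 0
    s2 = np.sqrt(np.clip(h0, 0.0, None))        # -h + s2^2 = 0
    return (p0_x, p0_y), s1, s2
```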
We then find a saddle point of the augmented Lagrangian functional by the following iteration.

1. For k = 1, 2, …, find an approximate minimizer (h̄_1^{k+1}, p_1^k, s_1^k, s_2^k) of the augmented Lagrangian functional with fixed Lagrange multipliers λ_{p_1} = λ_{p_1}^{k−1}, λ_{s_1} = λ_{s_1}^{k−1} and λ_{s_2} = λ_{s_2}^{k−1}.
2. Update the Lagrange multipliers, e.g. λ_{p_1}^k = λ_{p_1}^{k−1} + r_{p_1} (p_1^k − ∇h̄_1^{k+1}), and similarly for λ_{s_1}^k and λ_{s_2}^k.
3. If every residual decreased, keep the penalty parameters unchanged; else we update the penalty parameters of the variables whose residual e_x did not decrease, by

    r_x^{new} = 1.2 r_x,  x ∈ {p_1, s_1, s_2},

where the residual corresponding to a variable x measures the violation of its constraint, e.g. e_{p_1} = ∫_Ω |p_1 − ∇h̄_1^{k+1}| dx dy, e_{s_1} = ∫_Ω |h̄_1^{k+1} − 1 + s_1²| dx dy and e_{s_2} = ∫_Ω |−h̄_1^{k+1} + s_2²| dx dy.