if m > n and no diagonal entries of L^(m) are negative
then return "no negative-weight cycles"

If m ≤ 2, then either m = 1 or m = 2, and L^(m) is the first matrix with a negative entry on the diagonal. Thus, the correct value to return is m.
If m > 2, then we maintain an interval bracketed by the values low and high, such that the correct value m∗ is in the range low < m∗ ≤ high. We use the following loop invariant:
Loop invariant: At the start of each iteration of the "while d ≥ 1" loop,

1. d = 2^p for some integer p ≥ −1,
2. d = (high − low)/2,
3. low < m∗ ≤ high.
Initialization: Initially, m is an integer power of 2 and m > 2. Since d = m/4, we have that d is an integer power of 2 and d > 1/2, so that d = 2^p for some integer p ≥ 0. We also have (high − low)/2 = (m − (m/2))/2 = m/4 = d. Finally, L^(m) has a negative entry on the diagonal and L^(m/2) does not. Since low = m/2 and high = m, we have that low < m∗ ≤ high.
Maintenance: We use high, low, and d to denote variable values in a given iteration, and high′, low′, and d′ to denote the same variable values in the next iteration. Thus, we wish to show that d = 2^p for some integer p ≥ −1 implies d′ = 2^p′ for some integer p′ ≥ −1, that d = (high − low)/2 implies d′ = (high′ − low′)/2, and that low < m∗ ≤ high implies low′ < m∗ ≤ high′.

To see that d′ = 2^p′ for some integer p′ ≥ −1, note that d′ = d/2, and so d′ = 2^(p−1). The condition that d ≥ 1 implies that p ≥ 0, and so p′ = p − 1 ≥ −1.
Within each iteration, s is set to low + d, and one of the following actions occurs:
• If L^(s) has any negative entries on the diagonal, then high′ is set to s and d′ is set to d/2. Upon entering the next iteration, (high′ − low′)/2 = (s − low)/2 = ((low + d) − low)/2 = d/2 = d′. Since L^(s) has a negative diagonal entry, we know that m∗ ≤ s. Because high′ = s and low′ = low, we have that low′ < m∗ ≤ high′.
• If L^(s) has no negative entries on the diagonal, then low′ is set to s, and d′ is set to d/2. Upon entering the next iteration, (high′ − low′)/2 = (high − s)/2 = (high − (low + d))/2 = (high − low)/2 − d/2 = d − d/2 = d/2 = d′. Since L^(s) has no negative diagonal entries, we know that m∗ > s. Because low′ = s and high′ = high, we have that low′ < m∗ ≤ high′.
Termination: At termination, d < 1. Since d = 2^p for some integer p ≥ −1, we must have p = −1, so that d = 1/2. By the second part of the loop invariant, if we multiply both sides by 2, we get that high − low = 2d = 1. By the third part of the loop invariant, we know that low < m∗ ≤ high. Since high − low = 2d = 1 and m∗ > low, the only possible value for m∗ is high, which the procedure returns.
Time: If there is no negative-weight cycle, the first while loop iterates Θ(lg n) times, and the total time is Θ(n³ lg n).
Now suppose that there is a negative-weight cycle. We claim that each time we call EXTEND-SHORTEST-PATHS(L^(low), L^(d)), we have already computed L^(low) and L^(d). Initially, since low = m/2, we had already computed L^(low) in the first while loop. In succeeding iterations of the second while loop, the only way that low changes is when it gets the value of s, and we have just computed L^(s). As for L^(d), observe that d takes on the values m/4, m/8, m/16, …, 1, and again, we computed all of these L matrices in the first while loop. Thus, the claim is proven. Each of the two while loops iterates Θ(lg m∗) times. Since we have already computed the parameters to each call of EXTEND-SHORTEST-PATHS, each iteration is dominated by the Θ(n³)-time call to EXTEND-SHORTEST-PATHS. Thus, the total time is Θ(n³ lg m∗).

In general, therefore, the running time is Θ(n³ lg min(n, m∗)).
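The following is a minimal Python sketch of the whole procedure under the conventions above: the repeated-squaring phase saves every matrix it computes, and the binary-search phase reuses them. EXTEND-SHORTEST-PATHS is realized as a min-plus matrix product; the function names are ours, not the book's.

INF = float('inf')

def extend(A, B):
    # EXTEND-SHORTEST-PATHS: min-plus product, C[i][j] = min over k of A[i][k] + B[k][j].
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def has_negative_diagonal(L):
    return any(L[i][i] < 0 for i in range(len(L)))

def min_length_negative_cycle(W):
    # W[i][j] = weight of edge (i, j), with W[i][i] = 0 and INF for absent edges.
    # Returns the smallest m such that L^(m) has a negative diagonal entry
    # (the fewest edges on any negative-weight cycle), or None if there is none.
    n = len(W)
    L = {1: W}
    m = 1
    # First while loop: repeated squaring, keeping all computed matrices.
    while m < n and not has_negative_diagonal(L[m]):
        L[2 * m] = extend(L[m], L[m])
        m *= 2
    if not has_negative_diagonal(L[m]):
        return None                  # m >= n and still no negative diagonal
    if m <= 2:
        return m                     # L^(m) is the first matrix with one
    # Second while loop: binary search maintaining low < m* <= high, d = (high - low)/2.
    low, high, d = m // 2, m, m // 4
    while d >= 1:
        s = low + d
        L[s] = extend(L[low], L[d])  # both operands were already computed
        if has_negative_diagonal(L[s]):
            high = s
        else:
            low = s
        d //= 2
    return high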
Space: The slower algorithm needs to keep only three matrices at any time, and so its space requirement is Θ(n²). This faster algorithm needs to maintain Θ(lg min(n, m∗)) matrices, and so the space requirement increases to Θ(n² lg min(n, m∗)).
Solution to Exercise 25.2-4
With the superscripts, the computation is

d_ij^(k) ← min(d_ij^(k−1), d_ik^(k−1) + d_kj^(k−1)).

If, having dropped the superscripts, we were to compute and store d_ik or d_kj before using these values to compute d_ij, we might be computing one of the following:

d_ij^(k) ← min(d_ij^(k−1), d_ik^(k) + d_kj^(k−1)),
d_ij^(k) ← min(d_ij^(k−1), d_ik^(k−1) + d_kj^(k)),
d_ij^(k) ← min(d_ij^(k−1), d_ik^(k) + d_kj^(k)).

In any of these scenarios, we're computing the weight of a shortest path from i to j with all intermediate vertices in {1, 2, …, k}. If we use d_ik^(k), rather than d_ik^(k−1), in the computation, then we're using a subpath from i to k with all intermediate vertices in {1, 2, …, k}. But k cannot be an intermediate vertex on a shortest path from i to k, since otherwise there would be a cycle on this shortest path. Thus, d_ik^(k) = d_ik^(k−1). A similar argument applies to show that d_kj^(k) = d_kj^(k−1). Hence, we can drop the superscripts in the computation.
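As a concrete illustration, here is the superscript-free computation as a minimal Python sketch: a single matrix d is updated in place, which is safe precisely because row k and column k do not change during iteration k.

INF = float('inf')

def floyd_warshall(W):
    # W[i][j] = edge weight, with W[i][i] = 0 and INF for absent edges.
    n = len(W)
    d = [row[:] for row in W]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # d[i][k] and d[k][j] still hold their (k-1)-superscript values
                # here, so dropping the superscripts does not change the result.
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d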
Solution to Exercise 25.2-6
Here are two ways to detect negative-weight cycles:
1. Check the main-diagonal entries of the result matrix for a negative value. There is a negative-weight cycle if and only if d_ii^(n) < 0 for some vertex i:

• d_ii^(n) is a path weight from i to itself; so if it is negative, there is a path from i to itself (i.e., a cycle) with negative weight.
• If there is a negative-weight cycle, consider the one with the fewest vertices.
  • If it has just one vertex, then some w_ii < 0, so d_ii starts out negative, and since d values are never increased, it is also negative when the algorithm terminates.
  • If it has at least two vertices, let k be the highest-numbered vertex in the cycle, and let i be some other vertex in the cycle. d_ik^(k−1) and d_ki^(k−1) have correct shortest-path weights, because they are not based on negative-weight cycles. (Neither d_ik^(k−1) nor d_ki^(k−1) can include k as an intermediate vertex, and i and k are on the negative-weight cycle with the fewest vertices.) Since i ; k ; i is a negative-weight cycle, the sum of those two weights is negative, so d_ii^(k) will be set to a negative value. Since d values are never increased, it is also negative when the algorithm terminates.
In fact, it suffices to check whether d_ii^(n−1) < 0 for some vertex i. Here's why. A negative-weight cycle containing vertex i either contains vertex n or it does not. If it does not, then clearly d_ii^(n−1) < 0. If the negative-weight cycle contains vertex n, then consider d_nn^(n−1). This value must be negative, since the cycle, starting and ending at vertex n, does not include vertex n as an intermediate vertex.
2. Alternatively, one could just run the normal FLOYD-WARSHALL algorithm one extra iteration to see if any of the d values change. If there are negative cycles, then some shortest-path cost will be cheaper. If there are no such cycles, then no d values will change, because the algorithm gives the correct shortest paths.
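A minimal Python sketch of the first method (the Floyd-Warshall core is repeated so the snippet stands alone; the two-vertex graph is our own example, with a negative-weight cycle 0 → 1 → 0 of total weight −1):

INF = float('inf')

def has_negative_cycle(W):
    # Run Floyd-Warshall, then check the main diagonal of the result.
    n = len(W)
    d = [row[:] for row in W]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return any(d[i][i] < 0 for i in range(n))

W = [[0, 2],
     [-3, 0]]
print(has_negative_cycle(W))   # True: d[0][0] becomes 2 + (-3) = -1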
Solution to Exercise 25.3-4
It changes shortest paths. Consider the following graph: V = {s, x, y, z}, and there are 4 edges: w(s, x) = 2, w(x, y) = 2, w(s, y) = 5, and w(s, z) = −10. So we'd add 10 to every weight to make ŵ. With w, the shortest path from s to y is s → x → y, with weight 4. With ŵ, the shortest path from s to y is s → y, with weight 15. (The path s → x → y has weight 24.) The problem is that by just adding the same amount to every edge, you penalize paths with more edges, even if their weights are low.
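A few lines of Python confirm the arithmetic (path_weight and shift are our own names; shift is the constant added to every edge weight):

w = {('s', 'x'): 2, ('x', 'y'): 2, ('s', 'y'): 5, ('s', 'z'): -10}

def path_weight(path, weights, shift=0):
    # Sum the (shifted) weights along consecutive pairs of vertices.
    return sum(weights[(u, v)] + shift for u, v in zip(path, path[1:]))

for shift in (0, 10):
    print(shift,
          path_weight(['s', 'x', 'y'], w, shift),   # 2-edge path
          path_weight(['s', 'y'], w, shift))        # 1-edge path
# shift = 0:  s -> x -> y has weight 4,  s -> y has weight 5
# shift = 10: s -> x -> y has weight 24, s -> y has weight 15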
Solution to Exercise 25.3-6
In this solution, we assume that ∞ − ∞ is undefined; in particular, it's not 0.

Let G = (V, E), where V = {s, u}, E = {(u, s)}, and w(u, s) = 0. There is only one edge, and it enters s. When we run Bellman-Ford from s, we get h(s) = δ(s, s) = 0 and h(u) = δ(s, u) = ∞. When we reweight, we get ŵ(u, s) = 0 + ∞ − 0 = ∞. We compute δ̂(u, s) = ∞, and so we compute d_us = ∞ + 0 − ∞ ≠ 0. Since δ(u, s) = 0, we get an incorrect answer.
If the graph G is strongly connected, then we get h(v) = δ(s, v) < ∞ for all vertices v ∈ V. Thus, the triangle inequality says that h(v) ≤ h(u) + w(u, v) for all edges (u, v) ∈ E, and so ŵ(u, v) = w(u, v) + h(u) − h(v) ≥ 0. Moreover, all edge weights ŵ(u, v) used in Lemma 25.1 are finite, and so the lemma holds. Therefore, the conditions we need in order to use Johnson's algorithm hold: that reweighting does not change shortest paths, and that all edge weights ŵ(u, v) are nonnegative. Again relying on G being strongly connected, we get that δ̂(u, v) < ∞ for all edges (u, v) ∈ E, which means that d_uv = δ̂(u, v) + h(v) − h(u) is finite and correct.
Solution to Problem 25-1
a. Let T be the |V| × |V| matrix representing the transitive closure, such that T[i, j] is 1 if there is a path from i to j, and 0 if not.

Initialize T (when there are no edges in G) as follows:
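A Python sketch of the initialization and of the update performed when edge (u, v) is inserted; vertices are 0-based indices, and the function names are ours:

def init_transitive_closure(n):
    # With no edges, the only paths are the trivial ones from each vertex to itself.
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def insert_edge(T, u, v):
    # Every vertex i that already reaches u now also reaches every j
    # already reachable from v.  Theta(V^2) per insertion.
    n = len(T)
    for i in range(n):
        for j in range(n):
            if T[i][u] and T[v][j]:
                T[i][j] = 1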
• This says that the effect of adding edge (u, v) is to create a path (via the new edge) from every vertex that could already reach u to every vertex that could already be reached from v.
• Note that the procedure sets T[u, v] ← 1, because of the initial values T[u, u] = T[v, v] = 1.
• This takes Θ(V²) time because of the two nested loops.
b. Consider inserting the edge (v_n, v_1) into the straight-line graph v_1 → v_2 → ⋯ → v_n, where n = |V|.

Before this edge is inserted, only n(n + 1)/2 entries in T are 1 (the entries on and above the main diagonal). After the edge is inserted, the graph is a cycle in which every vertex can reach every other vertex, so all n² entries in T are 1. Hence n² − n(n + 1)/2 = Θ(n²) = Θ(V²) entries must be changed in T, so any algorithm to update the transitive closure must take Ω(V²) time on this graph.
c. The algorithm in part (a) would take Θ(V⁴) time to insert all possible Θ(V²) edges, so we need a more efficient algorithm in order for any sequence of insertions to take only O(V³) total time.
To improve the algorithm, notice that the loop over j is pointless when T[i, v] = 1. That is, if there is already a path i ; v, then adding the edge (u, v) can't make any new vertices reachable from i. The loop to set T[i, j] to 1 for j such that there's a path v ; j is just setting entries that are already 1. Eliminate this redundant processing as follows:
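A Python sketch of the improved update (the test on T[i][v] corresponds to the "first two lines" in the analysis below, and the loop over j to the "last three lines"):

def insert_edge_amortized(T, u, v):
    n = len(T)
    for i in range(n):
        # Skip the inner loop when i already reaches v: it could only set
        # entries that are already 1.
        if T[i][u] and not T[i][v]:
            for j in range(n):
                if T[v][j]:
                    T[i][j] = 1    # includes T[i][v], since T[v][v] = 1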
We show that this procedure takes O(V³) time to update the transitive closure for any sequence of n insertions:

• There can't be more than |V|² edges in G, so n ≤ |V|².
• Summed over n insertions, the time for the first two lines is O(nV) = O(V³).
• The last three lines, which take Θ(V) time, are executed only O(V²) times for n insertions. To see this, notice that the last three lines are executed only when T[i, v] = 0, and in that case, the last line sets T[i, v] ← 1. Thus, the number of 0 entries in T is reduced by at least 1 each time the last three lines run. Since there are only |V|² entries in T, these lines can run at most |V|² times.
• Hence the total running time over n insertions is O(V³).
Maximum Flow
Chapter 26 overview
Network flow

Use a graph to model material that flows through conduits.

Each edge represents one conduit, and has a capacity, which is an upper bound on the flow rate = units/time.

Can think of edges as pipes of different sizes. But flows don't have to be of liquids. Book has an example where a flow is how many trucks per day can ship hockey pucks between cities.

Want to compute max rate that we can ship material from a designated source to a designated sink.
Flow networks
G = (V, E) directed.
Each edge (u, v) has a capacity c(u, v) ≥ 0.

If (u, v) ∉ E, then c(u, v) = 0.

Source vertex s, sink vertex t, assume s ; v ; t for all v ∈ V.

Example: [Edges are labeled with capacities.]
Positive flow: A function p : V × V → R satisfying

• Capacity constraint: For all u, v ∈ V, 0 ≤ p(u, v) ≤ c(u, v),
• Flow conservation: For all u ∈ V − {s, t}, Σ_{v∈V} p(v, u) = Σ_{v∈V} p(u, v) (flow in = flow out).

• Note that all positive flows are ≤ capacities.
• Verify flow conservation by adding up flows at a couple of vertices.
• Note that setting all positive flows = 0 is legitimate.
Cancellation with positive flows

• Without loss of generality, can say positive flow goes either from u to v or from v to u, but not both. (Because if not true, can transform by cancellation to be true.)
• In the above example, we can "cancel" 1 unit of flow in each direction between x and z:

  1 unit x → z and 2 units z → x  ⇒  0 units x → z and 1 unit z → x.

• In both cases, the "net flow" is 1 unit z → x.
• Capacity constraint is still satisfied (because flows only decrease).
• Flow conservation is still satisfied (flow in and flow out are both reduced by the same amount).
Here’s a concept similar to positive ßow:
Net ßow: A function f : V × V → R satisfying
• Capacity constraint: For all u , v ∈ V, f (u, v) ≤ c(u, v),
• Skew symmetry: For all u , v ∈ V, f (u, v) = − f (v, u),
• Flow conservation: For all u ∈ V − {s, t} ,
Trang 8“ßow in = ßow out”
The differences between positive flow p and net flow f:

• p(u, v) ≥ 0,
• f satisfies skew symmetry.
Equivalence of positive flow and net flow definitions

Define net flow in terms of positive flow:

• Define f(u, v) = p(u, v) − p(v, u).
• Argue, given the definition of p, that this definition of f satisfies the capacity constraint and flow conservation.

Define positive flow in terms of net flow:

• Define p(u, v) = max(f(u, v), 0).
• Argue, given the definition of f, that this definition of p satisfies the capacity constraint and flow conservation.
Trang 9[We’ll use net ßow, instead of positive ßow, for the rest of our discussion, in order
to cut the number of summations down by half From now on, we’ll just call it
“ßow” rather than “net ßow.”]
Value of flow f = |f| = Σ_{v∈V} f(s, v) = total flow out of source.
Consider the example below. [The cancellation possible in the previous example has been made here. Also, showing only flows that are positive.]
Cancellation with flow

If we have edges (u, v) and (v, u), skew symmetry makes it so that at most one of these edges has positive flow.

Say f(u, v) = 5. If we "ship" 2 units v → u, we lower f(u, v) to 3. The 2 units v → u cancel 2 of the u → v units.

Due to cancellation, a flow only gives us this "net" effect. We cannot reconstruct actual shipments from a flow:
5 units u → v and 0 units v → u is the same as 8 units u → v and 3 units v → u.

We could add another 3 units of flow u → v and another 3 units v → u, maintaining flow conservation. The flow from u to v would remain f(u, v) = 5, and f(v, u) = −5.
Maximum-flow problem: Given G, s, t, and c, find a flow whose value is maximum.
Implicit summation notation

We work with functions, like f, that take pairs of vertices as arguments.

Extend to take sets of vertices, with the interpretation of summing over all vertices in the set.

Example: If X and Y are sets of vertices, then

f(X, Y) = Σ_{x∈X} Σ_{y∈Y} f(x, y).

Therefore, can express flow conservation as f(u, V) = 0, for all u ∈ V − {s, t}.

Notation: Omit braces in sets with implicit summations.

Example: f(s, V − s). Here, f(s, V − s) really means f(s, V − {s}).
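To make the convention concrete, a tiny Python helper (assuming f is stored as a dict of dicts; the name F is ours):

def F(f, X, Y):
    # Implicit summation: f(X, Y) = sum of f(x, y) over all x in X, y in Y.
    return sum(f[x][y] for x in X for y in Y)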
Lemma

For any flow f in G = (V, E):

1. For all X ⊆ V, f(X, X) = 0,
2. For all X, Y ⊆ V, f(X, Y) = − f(Y, X),
3. For all X, Y, Z ⊆ V such that X ∩ Y = ∅, f(X ∪ Y, Z) = f(X, Z) + f(Y, Z) and f(Z, X ∪ Y) = f(Z, X) + f(Z, Y).

Example of using this lemma:
Lemma

|f| = f(V, t).

Proof First, show that f(V, V − s − t) = 0:

f(u, V) = 0 for all u ∈ V − {s, t}
⇒ f(V − s − t, V) = 0 (add up f(u, V) for all u ∈ V − {s, t})
⇒ f(V, V − s − t) = 0 (by lemma, part (2)).

Then

|f| = f(s, V) (definition of |f|)
    = f(V, V) − f(V − s, V) (lemma, part (3), since V = {s} ∪ (V − s))
    = − f(V − s, V) (lemma, part (1))
    = f(V, V − s) (lemma, part (2))
    = f(V, t) + f(V, V − s − t) (lemma, part (3))
    = f(V, t) (by the above).
A cut (S, T) of flow network G = (V, E) is a partition of V into S and T = V − S such that s ∈ S and t ∈ T.

• Similar to cut used in minimum spanning trees, except that here the graph is directed, and we require s ∈ S and t ∈ T.
For flow f, the net flow across cut (S, T) is f(S, T).
Capacity of cut (S, T ) is c(S, T ).
A minimum cut of G is a cut whose capacity is minimum over all cuts of G.
For our example:[Leave on board.]
Consider the cut S = {s, w, y}, T = {x, z, t}.

Note the difference between capacity and flow.

• Flow obeys skew symmetry, so f(y, x) = − f(x, y) = −1.
• Capacity does not: c(y, x) = 0, but c(x, y) = 1.

So include flows going both ways across the cut, but capacity going only S to T.

Now consider the cut S = {s, w, x, y}, T = {z, t}.
Therefore, maximum flow ≤ capacity of minimum cut.

Will see a little later that this is in fact an equality.
The Ford-Fulkerson method
Residual network
Given a flow f in network G = (V, E).

Consider a pair of vertices u, v ∈ V.

How much additional flow can we push directly from u to v?

That's the residual capacity,

c_f(u, v) = c(u, v) − f(u, v) ≥ 0 (since f(u, v) ≤ c(u, v)).

Residual network: G_f = (V, E_f), where

E_f = {(u, v) ∈ V × V : c_f(u, v) > 0}.

Each edge of the residual network can admit a positive flow.
For our example: [residual network figure, edges labeled with residual capacities]
Given flows f1 and f2, the flow sum f1 + f2 is defined by

(f1 + f2)(u, v) = f1(u, v) + f2(u, v)

for all u, v ∈ V.
Lemma

Given a flow network G, a flow f in G, and the residual network G_f, let f′ be any flow in G_f. Then the flow sum f + f′ is a flow in G with value |f + f′| = |f| + |f′|.

[See book for proof.]
Augmenting path

A path s ; t in G_f.

• Admits more flow along each edge.
• Like a sequence of pipes through which we can squirt more flow from s to t.

How much more flow can we push from s to t along augmenting path p?

c_f(p) = min {c_f(u, v) : (u, v) is on p}.
For our example, consider the augmenting path p = s, w, y, z, x, t.

Minimum residual capacity is 1.

After we push 1 additional unit along p: [Continue from G left on board from before.]
Observe that G_f now has no augmenting path. Why? No edges cross the cut ({s, w}, {x, y, z, t}) in the forward direction in G_f, so no path can get from s to t.

Claim that the flow shown in G is a maximum flow.
Corollary

Given a flow network G, a flow f in G, and an augmenting path p in G_f, define f_p as in the lemma, and define f′ : V × V → R by f′ = f + f_p. Then f′ is a flow in G with value |f′| = |f| + c_f(p) > |f|.
Theorem (Max-flow min-cut theorem)

The following are equivalent:

1. f is a maximum flow.
2. f admits no augmenting path.
3. |f| = c(S, T) for some cut (S, T).
Proof

(1) ⇒ (2): If f admits an augmenting path p, then (by the above corollary) we would get a flow with value |f| + c_f(p) > |f|, so f wasn't a maximum flow to start with.

(2) ⇒ (3): Suppose f admits no augmenting path. Define

S = {v ∈ V : there exists a path s ; v in G_f},
T = V − S.

Must have t ∈ T; otherwise there is an augmenting path.

Therefore, (S, T) is a cut.

For each u ∈ S and v ∈ T, must have f(u, v) = c(u, v), since otherwise (u, v) ∈ E_f and then v ∈ S. Therefore, |f| = f(S, T) = c(S, T).

(3) ⇒ (1): Since |f| ≤ c(S′, T′) for every cut (S′, T′), if |f| = c(S, T), then no flow can have value greater than c(S, T), and so f is a maximum flow.
Subtle difference between f[u, v] and f(u, v):

• f(u, v) is a function, defined on all u, v ∈ V.
• f[u, v] is a value computed by the algorithm.
• f[u, v] = f(u, v) where (u, v) ∈ E or (v, u) ∈ E.
• f[u, v] is undefined if neither (u, v) ∈ E nor (v, u) ∈ E.
Analysis: If capacities are all integer, then each augmenting path raises |f| by ≥ 1. If the max flow is f∗, then we need ≤ |f∗| iterations ⇒ time is O(E |f∗|). [Handwaving; see book for a better explanation.]

Note that this running time is not polynomial in the input size. It depends on |f∗|, which is not a function of |V| and |E|.

If capacities are rational, can scale them to integers.

If irrational, FORD-FULKERSON might never terminate!
Edmonds-Karp algorithm

Do FORD-FULKERSON, but compute augmenting paths by BFS of G_f. Augmenting paths are shortest paths s ; t in G_f, with all edge weights = 1.

Edmonds-Karp runs in O(V E²) time.
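Here is a compact Python sketch of Edmonds-Karp under the net-flow conventions of these notes (skew symmetry, residual capacity c_f = c − f). The dense scan of all vertices inside the BFS is for brevity, not efficiency; the input format (a dict of dicts of capacities) and all names are our own.

from collections import deque

def edmonds_karp(c, s, t):
    vertices = set(c) | {v for u in c for v in c[u]}
    cap = {u: {v: c.get(u, {}).get(v, 0) for v in vertices} for u in vertices}
    flow = {u: {v: 0 for v in vertices} for u in vertices}

    def bfs():
        # Shortest augmenting path s ; t in G_f (unit edge weights), or None.
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in vertices:
                if v not in parent and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    if v == t:
                        path = [t]
                        while parent[path[-1]] is not None:
                            path.append(parent[path[-1]])
                        return path[::-1]
                    q.append(v)
        return None

    value = 0
    while (path := bfs()) is not None:
        # Residual capacity of the path: minimum over its edges.
        cf = min(cap[u][v] - flow[u][v] for u, v in zip(path, path[1:]))
        for u, v in zip(path, path[1:]):
            flow[u][v] += cf
            flow[v][u] -= cf        # maintain skew symmetry
        value += cf
    return value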
To prove, need to look at distances to vertices in G_f.

Let δ_f(u, v) = shortest-path distance from u to v in G_f, with unit edge weights.
Lemma

For all v ∈ V − {s, t}, δ_f(s, v) increases monotonically with each flow augmentation.

Proof Suppose there exists v ∈ V − {s, t} such that some flow augmentation causes δ_f(s, v) to decrease. Will derive a contradiction.

Let f be the flow before the first augmentation that causes a shortest-path distance to decrease, and f′ the flow afterward.

Let v be a vertex with minimum δ_f′(s, v) whose distance was decreased by the augmentation, so that δ_f′(s, v) < δ_f(s, v). Let p = s ; u → v be a shortest path to v in G_f′, so that (u, v) ∈ E_f′ and δ_f′(s, u) = δ_f′(s, v) − 1. Since u's distance was not decreased (v has minimum δ_f′ among the decreased vertices), δ_f′(s, u) ≥ δ_f(s, u). If (u, v) ∈ E_f, then δ_f(s, v) ≤ δ_f(s, u) + 1 ≤ δ_f′(s, u) + 1 = δ_f′(s, v), contradicting the assumed decrease.

How can (u, v) ∉ E_f and (u, v) ∈ E_f′? The augmentation must increase flow v to u. Since Edmonds-Karp augments along shortest paths, the shortest path s to u in G_f has v → u as its last edge. Thus δ_f(s, v) = δ_f(s, u) − 1 ≤ δ_f′(s, u) − 1 = δ_f′(s, v) − 2 < δ_f′(s, v), again contradicting the assumed decrease.
Theorem

Edmonds-Karp performs O(V E) augmentations.
Proof Suppose p is an augmenting path and c_f(u, v) = c_f(p). Then call (u, v) a critical edge in G_f; it disappears from the residual network after an augmentation along p.

≥ 1 edge on any augmenting path is critical.

Will show that each of the |E| edges can become critical ≤ |V|/2 − 1 times.

Consider u, v ∈ V such that either (u, v) ∈ E or (v, u) ∈ E or both. Since augmenting paths are shortest paths, when (u, v) becomes critical the first time, δ_f(s, v) = δ_f(s, u) + 1.

Augment flow, so that (u, v) disappears from the residual network. This edge cannot reappear in the residual network until the flow from u to v decreases, which happens only if (v, u) is on an augmenting path in G_f′: δ_f′(s, u) = δ_f′(s, v) + 1. (f′ is the flow when this occurs.)
By the lemma, δ_f(s, v) ≤ δ_f′(s, v), so

δ_f′(s, u) = δ_f′(s, v) + 1
           ≥ δ_f(s, v) + 1
           = δ_f(s, u) + 2.
Therefore, from the time (u, v) becomes critical to the next time, the distance of u from s increases by ≥ 2. Initially, the distance to u is ≥ 0, and an augmenting path can't have s, u, and t as intermediate vertices.

Therefore, until u becomes unreachable from the source, its distance is ≤ |V| − 2 ⇒ u can become critical ≤ (|V| − 2)/2 = |V|/2 − 1 times.
Since O(E) pairs of vertices can have an edge between them in the residual graph, the total number of critical edges is O(V E). Since each augmenting path has ≥ 1 critical edge, there are O(V E) augmentations.

Use BFS to find each augmenting path in O(E) time ⇒ O(V E²) time.
Can get better bounds. Push-relabel algorithms in Sections 26.4–26.5 give O(V³). Can do even better.
Maximum bipartite matching
Example of a problem that can be solved by turning it into a flow problem.

G = (V, E) (undirected) is bipartite if we can partition V = L ∪ R such that all edges in E go between L and R.

A matching is a subset of edges M ⊆ E such that for all v ∈ V, ≤ 1 edge of M is incident on v. (Vertex v is matched if an edge of M is incident on it; otherwise it is unmatched.)

Maximum matching: a matching of maximum cardinality. (M is a maximum matching if |M| ≥ |M′| for all matchings M′.)
Problem: Given a bipartite graph (with the partition), find a maximum matching.

Application: Matching planes to routes.

• L = set of planes.
• R = set of routes.
• (u, v) ∈ E if plane u can fly route v.
• Want maximum # of routes to be served by planes.
Given G, define flow network G′ = (V′, E′):

• V′ = V ∪ {s, t},
• E′ = {(s, u) : u ∈ L} ∪ {(u, v) : (u, v) ∈ E} ∪ {(v, t) : v ∈ R},
• each edge has capacity 1.
Each vertex in V has ≥ 1 incident edge ⇒ |E| ≥ |V|/2.

Therefore, |E| ≤ |E′| = |E| + |V| ≤ 3|E|.

Therefore, |E′| = Θ(E).
Find a max flow in G′. Book shows that it will have integer values for all (u, v).

Use the edges that carry flow of 1 in the matching.

Book proves that this gives a maximum matching.
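A Python sketch of this reduction, reusing the edmonds_karp function from the earlier sketch (assumed to be in scope); the vertex names 'source' and 'sink' are arbitrary labels assumed not to collide with vertices of G:

def max_bipartite_matching(L, R, edges):
    # Build G': source -> L, the edges of E directed L -> R, and R -> sink,
    # every edge with capacity 1.  The max-flow value is the matching size.
    s, t = 'source', 'sink'
    c = {s: {u: 1 for u in L}}
    for u in L:
        c[u] = {v: 1 for v in R if (u, v) in edges}
    for v in R:
        c[v] = {t: 1}
    return edmonds_karp(c, s, t)

# Planes {p1, p2} and routes {r1, r2}: p1 can fly both routes, p2 only r1.
print(max_bipartite_matching({'p1', 'p2'}, {'r1', 'r2'},
                             {('p1', 'r1'), ('p1', 'r2'), ('p2', 'r1')}))  # 2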
Solution to Exercise 26.1-6
The flow sum f1 + f2 satisfies skew symmetry and flow conservation, but might violate the capacity constraint.

We give proofs for skew symmetry and flow conservation and an example that shows a violation of the capacity constraint. Let f(u, v) = (f1 + f2)(u, v).

For skew symmetry:

f(u, v) = f1(u, v) + f2(u, v)
        = − f1(v, u) − f2(v, u)  (skew symmetry)
        = −(f1(v, u) + f2(v, u))
        = − f(v, u).

For flow conservation, let u ∈ V − {s, t}:

f(u, V) = f1(u, V) + f2(u, V) = 0 + 0 = 0.

To see that the capacity constraint might be violated, consider a network with an edge (s, t) of capacity c(s, t) = 1 and flows with f1(s, t) = f2(s, t) = 1. Then f1 and f2 each obey the capacity constraint, but (f1 + f2)(s, t) = 2 > c(s, t), which violates the capacity constraint.
To see that the flows form a convex set, we show that if f1 and f2 are flows, then so is α f1 + (1 − α) f2 for all α such that 0 ≤ α ≤ 1.

For the capacity constraint, first observe that α ≤ 1 implies that 1 − α ≥ 0. Thus, for any u, v ∈ V, we have

α f1(u, v) + (1 − α) f2(u, v) ≥ α · 0 + (1 − α) · 0 = 0.

Since f1(u, v) ≤ c(u, v) and f2(u, v) ≤ c(u, v), we also have

α f1(u, v) + (1 − α) f2(u, v) ≤ α c(u, v) + (1 − α) c(u, v)
                             = (α + (1 − α)) c(u, v)
                             = c(u, v).

For skew symmetry, we have f1(u, v) = − f1(v, u) and f2(u, v) = − f2(v, u) for any u, v ∈ V. Thus, we have

α f1(u, v) + (1 − α) f2(u, v) = −α f1(v, u) − (1 − α) f2(v, u)
                             = −(α f1(v, u) + (1 − α) f2(v, u))
                             = −(α f1 + (1 − α) f2)(v, u).