Mastering Algorithms with Perl (part 5)



Figure 8-24.

A graph and its representation in Perl

Creating Graphs, Dealing with Vertices

First we will define functions for creating graphs and adding and checking vertices. We put these into Graph::Base because later we'll see that our data structures are affected by whether or not a graph is directed.


# Adds the vertices to the graph $G, returns the graph.

# In list context returns the vertices @V of the graph $G.

# In scalar context (implicitly) returns the number of the vertices.

# Returns true if the vertex $v exists in

# the graph $G and false if it doesn't.
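
The comment blocks above describe add_vertices, vertices, and a vertex-existence test. A minimal sketch of how they might look, assuming the vertices are simply kept as the keys of an internal hash (the $G->{ V } name and the has_vertex method name are illustrative assumptions, not necessarily the book's internals):

sub add_vertices {
    my ($G, @v) = @_;
    @{ $G->{ V } }{ @v } = @v;          # add each vertex, idempotently
    return $G;
}

sub add_vertex {                        # a single vertex is just a special case
    my ($G, $v) = @_;
    return $G->add_vertices($v);
}

sub vertices {
    my $G = shift;
    my @V = keys %{ $G->{ V } };
    return wantarray ? @V : scalar @V;  # list: the vertices; scalar: their count
}

sub has_vertex {
    my ($G, $v) = @_;
    return exists $G->{ V }->{ $v };
}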

Testing for and Adding Edges

Next we'll see how to check for edges' existence and how to create edges and paths. Before we tackle edges, we must talk about how we treat directedness in our data structures and code. We will have a single flag per graph (D) that tells whether it is of the directed or undirected kind. In addition to querying directedness, we will also allow for changing it dynamically. This requires re-blessing the graph and rebuilding the set of edges.

# directed

#

# $b = $G->directed($d)

#

# Set the directedness of the graph $G to $d or return the

# current directedness. Directedness defaults to true.

only either of the edges u – v and v – u, not both.
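
A sketch of what the directed method might look like, keeping the flag in $G->{ D } as described above; the _rebuild_edges helper is hypothetical and stands for re-adding every edge under the new directedness:

sub directed {
    my ($G, $d) = @_;

    if (defined $d) {
        $d = $d ? 1 : 0;
        my $was = defined $G->{ D } ? $G->{ D } : 1;   # directedness defaults to true
        if ($was != $d) {
            $G->{ D } = $d;
            bless $G, $d ? 'Graph::Directed' : 'Graph::Undirected';
            $G->_rebuild_edges;    # hypothetical helper: rebuild the set of edges
        }
    }

    return defined $G->{ D } ? $G->{ D } : 1;
}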

Now we are ready to add edges (and by extension, paths):

# add_edge

#

# $G = $G->add_edge($u, $v)

#

# Adds the edge defined by the vertices $u, $v, to the graph $G.

# Also implicitly adds the vertices. Returns the graph.

#

sub add_edge {
    my ($G, $u, $v) = @_;

    $G->add_vertex($u);
    $G->add_vertex($v);
    # Record the edge in the internal successor and predecessor maps
    # used by _successors() and _predecessors() (layout assumed).
    push @{ $G->{ Succ }->{ $u }->{ $v } }, $v;
    push @{ $G->{ Pred }->{ $v }->{ $u } }, $u;

    return $G;
}

# Adds the edges defined by the vertices $u1, $v1, ...,
# to the graph $G. Also implicitly adds the vertices.
# Returns the graph.

# Adds the path defined by the vertices $u, $v, ...,
# to the graph $G. Also implicitly adds the vertices.
# Returns the graph.
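
Both methods reduce directly to add_edge; a sketch:

sub add_edges {
    my $G = shift;
    while (my ($u, $v) = splice(@_, 0, 2)) {
        $G->add_edge($u, $v);
    }
    return $G;
}

sub add_path {
    my ($G, $u, @rest) = @_;
    for my $v (@rest) {
        $G->add_edge($u, $v);
        $u = $v;               # each vertex becomes the start of the next edge
    }
    return $G;
}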

define edges, really? The difference in our implementation is that an undirected graph will "fake" half of its edges: it will believe it has an edge going from vertex v to vertex u, even if there is an edge going only in the opposite direction. To implement this illusion, we will define an internal method called _edges differently for directed and undirected edges.

Now we are ready to return edges—and the vertices at the other end of those edges: the successor, predecessor, and neighbor vertices. We will also use a couple of helper methods because of directedness issues, _successors and _predecessors (directed graphs are a bit tricky here).

# _successors

#

# @s = $G->_successors($v)

#

# (INTERNAL USE ONLY, use only on directed graphs)

# Returns the successor vertices @s of the vertex $v.

# _predecessors
#
# @p = $G->_predecessors($v)
#
# (INTERNAL USE ONLY, use only on directed graphs)
# Returns the predecessor vertices @p of the vertex $v.

Using _successors and _predecessors to define successors, predecessors, and neighbors is easy. To keep both sides of the Atlantic happy we also define:

use vars '*neighbours';

*neighbours = \&neighbors; # Make neighbours() to equal neighbors().
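
For a directed graph the public wrappers can be as thin as the following sketch; an undirected graph can lean on its faked reverse edges so that successors, predecessors, and neighbors all return the same set:

sub successors {
    my ($G, $v) = @_;
    return $G->_successors($v);
}

sub predecessors {
    my ($G, $v) = @_;
    return $G->_predecessors($v);
}

sub neighbors {
    my ($G, $v) = @_;
    my %seen;
    return grep { !$seen{ $_ }++ } $G->successors($v), $G->predecessors($v);
}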

Now we can finally return edges:

# (INTERNAL USE ONLY)

# Both vertices undefined:

# returns all the edges of the graph.

# Both vertices defined:

# returns all the edges between the vertices.

# Only 1st vertex defined:

# returns all the edges leading out of the vertex.

# Only 2nd vertex defined:

# returns all the edges leading into the vertex.

# Edges @e are returned as ($start_vertex, $end_vertex) pairs.


# Returns the edges between the vertices $u and $v, or if $v

# is undefined, the edges leading into or out of the vertex $u,

# or if $u is undefined, returns all the edges of the graph $G.

# In list context, returns the edges as a list of

# $start_vertex, $end_vertex pairs; in scalar context,

# returns the number of the edges.

The in_edges and out_edges methods are trivially implementable using _edges.
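
In other words, they only need to leave one end of the _edges query undefined; a sketch:

sub out_edges {
    my ($G, $u) = @_;
    return $G->_edges($u, undef);   # edges leading out of $u
}

sub in_edges {
    my ($G, $v) = @_;
    return $G->_edges(undef, $v);   # edges leading into $v
}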

Density, Degrees, and Vertex Classes

Now that we know how to return (the number of) vertices and edges, implementing density is easy. We will first define a helper method, density_limits, that computes all the necessary limits for a graph: the actual functions can simply use that data.

# density_limits

#

# ($sparse, $dense, $complete) = $G->density_limits

#

# Returns the density limits for the number of edges

# in the graph $G. Note that reaching $complete edges

# does not really guarantee completeness because we

# can have multigraphs.

#

sub density_limits {
    my $G = shift;
    my $V = $G->vertices;
    # Assumed limits: at most $V*($V-1) edges (halved if undirected);
    # "sparse" is below one quarter of that, "dense" above three quarters.
    my $M = $V * ( $V - 1 );
    $M /= 2 unless $G->directed;
    return ( $M / 4, 3 * $M / 4, $M );
}

sub density {
    my $G = shift;
    my ($sparse, $dense, $complete) = $G->density_limits;
    return $complete ? $G->edges / $complete : 0;
}

and analogously, is_sparse and is_dense. Because we now know how to count edges per vertex, we can compute the various degrees: in_degree, out_degree, degree, and average_degree. Because we can find out the degrees of each vertex, we can classify them as follows:

sub source_vertices {
    my $G = shift;
    return grep { $G->is_source_vertex($_) } $G->vertices;
}
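
The classification itself follows from the degrees; a sketch of the underlying predicates, assuming in_degree and out_degree as mentioned above:

sub is_source_vertex {              # edges lead only out of it
    my ($G, $v) = @_;
    return $G->out_degree($v) > 0 && $G->in_degree($v) == 0;
}

sub is_sink_vertex {                # edges lead only into it
    my ($G, $v) = @_;
    return $G->in_degree($v) > 0 && $G->out_degree($v) == 0;
}

sub is_isolated_vertex {            # no edges at all
    my ($G, $v) = @_;
    return $G->in_degree($v) == 0 && $G->out_degree($v) == 0;
}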

Deleting Edges and Vertices

Now we are ready to delete graph edges and vertices, with delete_edge, delete_edges, and delete_vertex. As we mentioned earlier, deleting vertices is actually harder because it may require deleting some edges first (a "dangling" edge attached to fewer than two vertices is not well defined).

# Deletes an edge defined by the vertices $u, $v from the graph $G.

# Note that the edge need not actually exist.

# Returns the graph.

# Deletes edges defined by the vertices $u1, $v1, ...,

# from the graph $G.

# Note that the edges need not actually exist.

# Returns the graph.

#

sub delete_edges {
    my $G = shift;
    while (my ($u, $v) = splice(@_, 0, 2)) {
        $G->delete_edge($u, $v);
    }
    return $G;
}

# Deletes the vertex $v and all its edges from the graph $G.

# Note that the vertex need not actually exist.

# Returns the graph.
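
A sketch of delete_vertex along those lines: first delete every edge touching the vertex, then drop the vertex itself (the $G->{ V } hash is an assumed internal name):

sub delete_vertex {
    my ($G, $v) = @_;

    my @E = ($G->in_edges($v), $G->out_edges($v));
    while (my ($u, $w) = splice(@E, 0, 2)) {
        $G->delete_edge($u, $w);
    }

    delete $G->{ V }->{ $v };
    return $G;
}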

Our implementation can set, get, and test for attributes, with set_attribute, get_attribute, and has_attribute, respectively. For example, to set the attribute color of the vertex x to red and to get the attribute distance of the edge from p to q:
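
In code that might look like the following; the argument order (attribute name first, then the vertex, or the edge's two vertices, then the value) is an assumption:

$G->set_attribute('color', 'x', 'red');              # vertex attribute
if ($G->has_attribute('distance', 'p', 'q')) {       # edge attribute
    my $d = $G->get_attribute('distance', 'p', 'q');
    print "p-q distance: $d\n";
}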

double-dash. (Actually, it's an "equals" sign.) We will implement this using the operator overloading of Perl—and the fact that conversion into a string is an operator ("") in Perl. Anything we print() is first converted into a string, or stringified.

We overload the "" operator in all three classes: our base class, Graph::Base, and the two derived classes, Graph::Directed and Graph::Undirected. The derived classes will call the base class, with such parameters that differently directed edges will look right.

Also, notice how we now can define a Graph::Base method for checking exact equality of graphs.

# (INTERNAL USE ONLY)

# Returns a string representation of the graph $G.

# The edges are represented by $connector and edges/isolated

# vertices are represented by $separator.

    # Collect the flat ($start_vertex, $end_vertex) list into pairs.
    while (my ($u, $v) = splice(@E, 0, 2)) {
        push @e, [$u, $v];
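
The hookup for the stringification operator itself is tiny; a sketch of how one of the derived classes might do it, with _stringify doing the real work (the separator string here is a guess):

package Graph::Undirected;
use overload '""' => \&stringify, fallback => 1;

sub stringify {
    my $G = shift;
    return $G->_stringify('=', ',');   # "=" per the text above; "," is a guess
}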


# Return true if the graphs $G and $H (actually, their string

# representations) are identical. This means really identical:

# the graphs must have identical vertex names and identical edges

# between the vertices, and they must be similarly directed.

# (Graph isomorphism isn't enough.)
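
Since the comparison rides on the string representations, the method can simply compare the overloaded stringifications, assuming _stringify lists the vertices and edges in a canonical (sorted) order; a sketch:

sub eq {
    my ($G, $H) = @_;
    return "$G" eq "$H";
}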

All graph algorithms depend on processing the vertices and the edges in some order. This process of walking through the graph is called graph traversal. Most traversal orders are sequential: select a vertex, select an edge leading out of that vertex, select the vertex at the other end of that edge, and so on. Repeat this until you run out of unvisited vertices (or edges, depending on your algorithm). If traversal runs into a dead end, you can recover: just pick any remaining, unvisited vertex and retry.

The two most common traversal orders are the depth-first order and the breadth-first order; more on these shortly. They can be used both for directed and undirected graphs, and they both run until they have visited all the vertices. You can read more about depth-first and breadth-first in Chapter 5, Searching.

In principle, one can walk the edges in any order. Because of this ambiguity, there are numerous orderings: O(|E|!) possibilities, which grows extremely quickly. In many algorithms one can pick any edge to follow, but in some algorithms it does matter in which order the adjacent vertices are traversed. Whatever we do, we must look out for cycles. A cycle is a sequence of edges that leads us to somewhere where we have been before (see Figure 8-25).

…blinding vision, the search engines preprocess the mountains of data—by traversing and indexing them. When you then ask the search engine for camel trekking in Mongolia, it triumphantly has the answer ready. Or not.


Figure 8-25.
A graph traversal runs into a cycle

There are cycles in the Web: for example, between a group of friends. If two people link to one another, that's a small cycle. If Alice links to Bob, Bob to Jill, Jill to Tad, and Tad to Alice, that's a larger cycle. (If everyone links to everyone else, that's a complete graph.)

Graph traversal doesn't solve many problems by itself. It just defines some order in which to walk, climb, fly, or burrow through the vertices and the edges. The key question is, what do you do when you get there? The real benefit of traversal orders becomes evident when operations are triggered by certain events during the traversal. For instance, you could write a program that triggers an operation such as storing data every time you reach a sink vertex (one not followed by other vertices).

Depth-First Search


The depth-first search order (DFS) is perhaps the most commonly used graph traversal order. It is by nature a recursive procedure. In pseudocode:

depth-first ( graph G, vertex u )
    mark vertex u as seen
    for every unseen neighboring vertex of u called v
        depth-first ( G, v )
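
A direct transcription of the pseudocode into self-contained Perl, using a plain hash of adjacency lists instead of the Graph classes, might look like this:

sub depth_first {
    my ($graph, $u, $seen, $order) = @_;
    $seen->{ $u } = 1;                       # mark vertex u as seen
    push @$order, $u;
    for my $v (@{ $graph->{ $u } || [] }) {  # every neighboring vertex of u
        depth_first($graph, $v, $seen, $order) unless $seen->{ $v };
    }
    return @$order;
}

my %G = ( a => [ 'b', 'c' ], b => [ 'd' ], c => [ 'd' ], d => [] );
print join(" ", depth_first(\%G, 'a', {}, [])), "\n";   # prints: a b d c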

Figure 8-26.
A graph being traversed in depth-first order, resulting in a depth-first tree

By using the traversal order as a framework, more interesting problems can be solved. To solve them, we'll want to define callback functions, triggered by events such as the following:

• Whenever a root vertex is seen

• Whenever a vertex is seen

• Whenever an edge is seen for the first time


• Whenever an edge is traversed

When called, the callback is passed the current context, consisting of the current vertex and how we have traversed so far. The context might also contain criteria such as the following:

• In which order the potential root vertices are visited

• Which are the potential root vertices to begin with

• In which order the successor vertices of a vertex are visited

• Which are the potential successor vertices to begin with

An example of a useful callback for graph G would be "add this edge to another graph" for the third event, "when an edge is seen for the first time." This callback would grow a depth-first forest (or, when the entire graph is connected, a single depth-first tree). As an example, this operation would be useful in finding the strongly connected components of a graph. Trees and forests are defined in more detail in the section "Graph Biology: Trees, Forests, DAGS, Ancestors, and Descendants" and strongly connected components in the section "Strongly Connected Graphs." See also the section "Parents and Children" later in this chapter.

The basic user interface of the current web browsers works depth-first: you select a link and you move to a new page. You can also back up by returning to the previous page. There is usually also a list of recently visited pages, which acts as a nice shortcut, but that convenience doesn't change the essential depth-first order of the list. If you are on a page in the middle of the list and start clicking on new links, you enter depth-first mode again.

Topological Sort

Topological sort is a listing of the vertices of a graph in such an order that all the ordering relations are respected.

Topology is a branch of mathematics that is concerned with properties of point sets that are unaffected by elastic transformations.* Here, the preserved properties are the ordering relations.

More precisely: a topological sort of a directed acyclic graph (a DAG) is a listing of the vertices so that, for all edges u-v, u comes before v in the listing. Topological sort is often used to solve temporal dependencies: subtasks need to be processed before the main task. In such a case the edges of the DAG point backwards in time, from the most recent task to the earliest.

For most graphs, there are several possible topological sorts: for an example, see Figure 8-27. Loose ordering like this is also known as partial ordering, and the graphs describing them as dependency graphs. Cyclic graphs cannot be sorted topologically for obvious reasons: see Figure 8-28.

An example of topological sort is cleaning up the garage. Before you can even start the gargantuan task, you need to drive the car out. After that, the floor needs hoovering, but before that, you need to move that old sofa. Which, in turn, has all your old vinyl records in cardboard boxes on top of it. The windows could use washing, too, but no sense in attempting that before dusting off the tool racks in front of them. And before you notice, the sun is setting. (See Figure 8-29.)

The topological sort is achieved by traversing the graph in depth-first order and listing the vertices in the order they are finished (that is, are seen for the last time).

* A topologist cannot tell the difference between a coffee mug and a donut because they both have one hole.

Because web pages form cycles, topologically sorting them is impossible. (Ordering web pages is anathema to hypertext anyway.)

Here is the code for cleaning up the garage using Perl:

use Graph;

my $garage = Graph->new;

$garage->add_path( qw( move_car move_LPs move_sofa
                       hoover_floor wash_floor ) );

$garage->add_edge( qw( junk_newspapers move_sofa ) );

$garage->add_path( qw( clean_toolracks wash_windows wash_floor ) );

my @topo = $garage->toposort;

print "garage toposorted = @topo\n";

This outputs:

garage toposorted = junk_newspapers move_car move_LPs move_sofa

hoover_floor clean_toolracks wash_windows wash_floor

Writing a book is an exercise in topological sorting: the author must be aware which concepts (in a technical book) or characters (in fiction) are mentioned in which order. In fiction, ignoring the ordering may work as a plot device: when done well, it yields mystery, foreboding, and curiosity. In technical writing, it yields confusion and frustration.

Make As a Topological Sort

Many programmers are familiar with a tool called make, a utility most often used to compile programs in languages that require compilation. But make is much more general: it is used to define dependencies between files—how from one file we can produce another file. Figure 8-30 shows the progress from sources to final executables as seen by make in the form of a graph.

Figure 8-30.
The dependency graph for producing the executable zog

This is no more and no less than a topological sort. The extra power stems from the generic nature of the make rules: instead of telling that foo.c can produce foo.o, the rules tell how any C source code file can produce its respective object code file. When you start collecting these rules together, a dependency graph starts to form. make is therefore a happy marriage of pattern matching and graph theory.


The ambiguity of topological sort can actually be beneficial. A parallel make (for example, GNU make) can utilize the looseness because source code files normally do not depend on each other. Therefore, several of them can be compiled simultaneously; in Figure 8-30, foo.o, zap.o, and zog.o could be produced simultaneously. You can find out more about using make from the book Managing Projects with make, by Andrew Oram and Steve Talbott.

Breadth-First Search

The breadth-first search order (BFS) is much less used than depth-first searching, but it has its benefits. For example, it minimizes the number of edges in the paths produced. BFS is used in finding the biconnected components of a graph and for Edmonds-Karp flow networks, both defined later in this chapter. Figure 8-31 shows the same graph as seen in Figure 8-26, but traversed this time in breadth-first search order.

The running time of BFS is the same as for DFS: O(|E|) if we do not need to restart because of unreached components, but if we do need to restart, it's O(|V| + |E|).

BFS is iterative (unlike DFS, which is recursive). In pseudocode it looks like:

breadth-first ( graph G, vertex u )
    create a queue with u as the initial (and already seen) vertex
    while there are vertices in the queue, dequeue a vertex and
        enqueue its unseen neighboring vertices, marking them as seen
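
The same pseudocode in self-contained Perl, again on a plain hash of adjacency lists, with an ordinary array doing duty as the queue:

sub breadth_first {
    my ($graph, $u) = @_;
    my (%seen, @order);
    my @queue = ($u);
    $seen{ $u } = 1;
    while (@queue) {
        my $v = shift @queue;                    # dequeue
        push @order, $v;
        for my $w (@{ $graph->{ $v } || [] }) {
            next if $seen{ $w }++;               # enqueue each vertex only once
            push @queue, $w;
        }
    }
    return @order;
}

my %G = ( a => [ 'b', 'c' ], b => [ 'd' ], c => [ 'd' ], d => [] );
print join(" ", breadth_first(\%G, 'a')), "\n";   # prints: a b c d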

It's hard to surf the Net in a BFS way: effectively, you would need to open a new browser window for each link you follow. As soon as you have opened all the links on a page, you could then close the window of that one page. Not exactly convenient.

Implementing Graph Traversal

One good way to implement graph traversal is to use a state machine. Given a graph and an initial configuration (such as the various callback functions), the machine switches states until all the graph vertices have been seen and all necessary edges traversed.


Figure 8-31.
A graph being traversed in breadth-first order, resulting in a breadth-first tree

For example, the state of the traversal machine might contain the following components:

• the current vertex

• the vertices in the current tree (the active vertices)

• the root vertex of the current tree

• the order in which the vertices have been found

• the order in which the vertices have been completely explored with every edge traversed (the

finished vertices)

• the unseen vertices

The configuration of the state machine includes the following callbacks:

• current for selecting the current vertex from among the active vertices (rather different for, say, DFS and BFS) (this callback is mandatory)

• successor for each successor vertex of the current vertex

• unseen_successor for each yet unseen successor vertex of the current vertex


• seen_successor for each already seen successor vertex of the current vertex


• finish for finished vertices; it removes the vertex from the active vertices (this callback is mandatory)

Our encapsulation of this state machine is the class Graph::Traversal; the following sections show usage examples.

Implementing Depth-First Traversal

Having implemented the graph-traversing state machine, implementing depth-first traversal is simply this:

# Returns a new depth-first search object for the graph $G

# and the (optional) parameters %param.

sub { pop @{ $_[0]->{ active_list } } }, @_);

}

That's it. Really. The only DFS-specific parameters are the callback functions current and finish. The former returns the last vertex of the active_list—or in other words, the top of the DFS stack. The latter does away with the same vertex, by applying pop() on the stack.
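
A sketch of what that constructor might look like; the exact way the book's Graph::Traversal constructor receives its callbacks may differ:

package Graph::DFS;
use vars qw(@ISA);
@ISA = qw(Graph::Traversal);

sub new {
    my ($class, $G, @param) = @_;
    return Graph::Traversal::new(
        $class, $G,
        current => sub { $_[0]->{ active_list }->[ -1 ] },    # top of the DFS stack
        finish  => sub { pop @{ $_[0]->{ active_list } } },   # pop it off
        @param);
}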

Topological sort is even simpler, because the ordered list of finished vertices built by the state machine is exactly what we want:

sub toposort {
    my $G = shift;
    my $d = Graph::DFS->new($G);

    # The postorder method runs the state machine dry by
    # repeatedly asking for the finished vertices, and
    # in list context the list of those vertices is returned.
    $d->postorder;
}

Implementing Breadth-First Traversal

Implementing breadth-first is as easy as implementing depth-first:

# Returns a new breadth-first search object for the graph $G

# and the (optional) parameters %param.
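
By analogy with the DFS constructor, only the two callbacks change: the active_list is now treated as a FIFO queue. A sketch under the same assumptions:

package Graph::BFS;
use vars qw(@ISA);
@ISA = qw(Graph::Traversal);

sub new {
    my ($class, $G, @param) = @_;
    return Graph::Traversal::new(
        $class, $G,
        current => sub { $_[0]->{ active_list }->[ 0 ] },      # front of the queue
        finish  => sub { shift @{ $_[0]->{ active_list } } },  # dequeue it
        @param);
}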

Paths and Bridges

A path is just a sequence of connected edges leading from one vertex to another. If one or more edges are repeated, the path becomes a walk. If all the edges are covered, we have a tour. There may be certain special paths possible in a graph: the Euler path and the Hamilton path.


The Seven Bridges of Königsberg


The Euler path brings us back to the origins of graph theory: the seven bridges connecting two banks and two islands of the river Pregel.* The place is the city of Königsberg, in the kingdom of East Prussia, and the year is 1736. (In case you are reaching for a map, neither East Prussia nor Königsberg exists today. Nowadays, 263 years later, the city is called Kaliningrad, and it belongs to Russia, at the southeastern shore of the Baltic Sea.) The history of graph theory begins.**

The puzzle: devise a walking tour that passes over each bridge once and only once. In graph terms, this means traversing each edge (bridge, in real terms) exactly once. Vertices (the river banks and the islands) may be visited more than once if needed. The process of abstracting the real-world situation from a map to a graph presenting the essential elements is depicted in Figure 8-32. Luckily for the cityfolk, the Swiss mathematician Leonhard Euler lived in Königsberg at the time.*** He proved that there is no such tour.

Euler proved that for an undirected connected graph (such as the bridges of Königsberg) to have such a path, at most two of the vertex degrees may be odd. If there are exactly two such vertices, the path must begin from either one of them and end at the other. More than two odd-degree vertices ruin the path. In this case, all the degrees are odd. The good people of Königsberg had to find something else to do. Paths meeting the criteria are still called Euler paths today and, if all the edges are covered, Euler tours.

The Hamiltonian path of a graph is kind of a complement of the Eulerian path: one must visit each vertex exactly once. The problem may sound closely related to the Eulerian, but in fact, it is nothing of the sort—and actually much harder. Finding the Eulerian is O(|E|) and relates to biconnectivity (take a look at the section "Biconnectivity"), while finding the Hamiltonian path is NP-hard. You may have seen the Hamiltonian path in puzzles: visit every room of the house but only once; the doors are the edges.

The Euler and Hamilton paths have more demanding relatives called Euler cycles and Hamilton cycles. These terms simply refer to connecting the ends of their respective paths in Eulerian and Hamiltonian graphs. If a cycle repeats edges, it

* Actually, to pick nits, there were more bridges than that. But for our purposes seven bridges is enough.

** The theory, that is: graphs themselves are much older. Prince Theseus (aided by princess Ariadne and her thread) of Greek legend did some practical graph fieldwork while stalking the Minotaur in the Labyrinth. Solving mazes is solving how to get from one vertex (crossing) to another, following edges (paths).

*** Euler was one of the greatest mathematicians of all time. For example, the notations e, i, f(x), and π are all his brainchildren. Some people quip that many mathematical concepts are named after the first person following Euler to investigate them.



Trees, Forests, DAGS, Ancestors, and Descendants

A tree is a connected undirected acyclic graph. In other words, every pair of vertices has one single path connecting them. Naturally, a tree has a root, branches, and leaves: you can see an example of a tree in Figure 8-33. (Note that the root of the tree is at the top; in computer science, trees grow down.) There is nothing sacred about the choice of the root vertex; any vertex can be chosen.

A leaf vertex is a vertex where the DFS traversal can proceed no deeper. The branch vertices are all the other vertices. Several disjunct trees make a forest. For directed graphs one can define trees, but the choice of the root vertex is more difficult: if the root vertex is chosen poorly, some vertices may be unreachable. Directed trees are called directed acyclic graphs (DAGs).


Figure 8-33.

A tree graph drawn in two different ways

An example of a tree is the Unix single-root directory tree: see Figure 8-34. Each leaf (file) can be reached via an unambiguous path of inner vertices of the tree (directories).

Figure 8-34.
A Unix filesystem tree

Symbolic links confuse this a little, but not severely: they're true one-directional directed edges (no going back), while all the other links (directories) are bidirectional (undirected) because they all have the back edge "..". The ".." of the root directory is a self-loop (in Unix, that is—in MS-DOS that is an Invalid directory).

Several trees make a forest. As we saw earlier, this might be the case when we have a directed graph where, by following the directed edges, one cannot reach all the parts of the graph. If the graph is not fully connected, there might be islands, where the subgraphs need not be trees: they can be collections of trees, individual trees, cycles, or even just individual vertices. An example of a forest is the directory model of MS-DOS or VMS: they have several roots, such as the familiar A: and C: drives. See Figure 8-35.

Figure 8-35.

An MS-DOS filesystem tree

If every branch of a tree (including the root vertex) has no more than two children, we have a binary tree. Three children make a ternary tree, and so on.

In the World Wide Web, islands are formed when the intranet of a company is completely separated from the big and evil Internet. No physical separation is necessary, though: if you create a set of web pages that point only to each other and let nobody know their URLs, you have created a logical island.

Parents and Children

Depth-first traversal of a tree graph can process the vertices in three basic orders: preorder, postorder, and, for binary trees, inorder (see Figures 8-36 and 8-37).

Figure 8-36.

Preorder and postorder of a graph

Figure 8-37.

Preorder, inorder, and postorder of a binary tree

The opportunities presented by different orders become quite interesting if our trees are syntax trees: see the section "Grammars" in Chapter 9, Strings. Thus, the expression 2 + 3 could be represented as a tree in which the + operation is the parent and the operands are the children; we might use inorder traversal to print the equation but preorder traversal to actually solve it.

We can think of a tree as a family tree, with parent vertices and child vertices, ancestors and descendants: for example, see Figure 8-38. Family trees consist of several interlacing trees. The immediate ancestors (directly connected) are predecessor vertices and the immediate descendants are successor vertices.

The directly connected vertices of a vertex are also called the neighbor vertices. Sometimes (with adjacency lists, for example) just the successor vertices are called adjacent vertices, which is a little bit confusing because the everyday meaning of "adjacent" includes both predecessors and successors.


Figure 8-38.

Two family trees forming a single family tree

Edge and Graph Classes

The graphs and their elements—vertices and edges—can be classified along several taxonomies. Vertex classes we already saw in the section "Vertex Degree and Vertex Classes" earlier in this chapter. In the following sections, we'll explore edge and graph classifications.

Edge Classes

An edge class is a property of an edge that describes what part it plays as you traverse the graph. For instance, a breadth-first or depth-first search finds all nodes by traversing certain edges, but it might skip other edges. The edges that are included are in one class; the excluded edges are in another. The existence (or nonexistence) of certain edge classes in a graph indicates certain properties of the graph. Depending on the traversal used, several possible edge classifications can exist for one single graph.

The most common edge classification method is to traverse a graph in depth-first order. The depth-first traversal classifies edges into four classes. Tree edges lead to vertices that have not been seen before; edges whose end vertices point to already seen vertices are either back edges, forward edges, or cross edges. Cross edges are all the other edges: they connect vertices that have no direct ancestor-descendant relationship, or, if the graph is directed, they may connect trees in a forest.

We can classify an edge as soon as we have traversed both of its vertices: see Figures 8-39 and 8-40.

Figure 8-39.
Classifying the edges of a graph

The classification of each edge as a tree edge or forward edge is subject to the quirks of the traversal order. Depending on the order in which the successors of a vertex are chosen, an edge may become classified either as a tree edge or as a forward edge rather haphazardly.

Undirected graphs have only tree edges and back edges. We define that neither forward edges nor cross edges will exist for undirected graphs: any edge that would by the rules of directed graphs be either a forward edge or a cross edge is for undirected graphs a back edge. For an example of classifying the edges of an undirected graph, see Figure 8-41.


# Returns the edge classification as a list where each element

# is a triplet [$u, $v, $class], the $u, $v being the vertices

# of an edge and $class being the class.


Figure 8-41.
An edge classification of an undirected graph

unless ( exists $T->{ vertex_finished }->{ $v } ) {
    $class = 'back';
    # No cross nor forward edges in
    # an undirected graph, by definition.

A directed graph is connected if all its vertices are reachable with one tree. If a forest of trees is required, the directed graph is not connected. An undirected graph is connected if all its vertices are reachable from any vertex. See also the section "Kruskal's minimum spanning tree."


Even stronger connectivities are possible: triconnectivity and, in general, k-connectivity. A complete graph of |V| vertices is (|V| - 1)-connected between any pair of vertices. The most basic example of a biconnected component would be three vertices connected in a triangle: any single one of the three vertices can disappear but the two remaining ones can still talk to each other. Big Internet routers are k-connected: there must be no single point of failure.

A graph is biconnected (at least) if it has no articulation points. An articulation point is exactly the kind of vertex we would rather not see, the Achilles' heel, the weak link. Removing it disconnects the graph into islands: see Figure 8-42. If there's only one printer server in the office LAN, it's an articulation point for printing. If it's malfunctioning, no print job can get through to the printers.

Biconnectivity (or, rather, the lack of it) introduces graph bridges: edges that have an articulation point at least at the other end. Exterior vertices are vertices that are connected to the rest of the graph by a bridge.

Exterior vertices can be used to refer to external "blackbox" entities: in an organizational chart, for instance, an exterior vertex can mean that a responsibility is done by a subcontractor outside the organization. See Figure 8-42 for some of the vulnerabilities discussed so far.

Back edges are essential for k-connectivity because they are alternate backup routes. However, there must be enough of them, and they must reach back far enough in the graph: if they fail this, their end vertices become articulation points. An articulation point may belong to more than one biconnected component, for example, vertex f in Figure 8-42. The articulation points in this graph are (c, f, i, k), the bridges are (c-f, f-h, i-k), and the exterior vertex is (h). The biconnected components are a-b-c-d, e-f-g, f-i-j, and k-l-m.

Figure 8-42.
A nonbiconnected graph with articulation points, bridges, and an exterior vertex


unless exists $T->{ articulation_point }->{ $u };

# Walk back the stack marking the active DFS branch
# (below $u) as belonging to the articulation point $ap.
for ( my $i = 1; $i < @S; $i++ ) {


%ap = map { ( $vf[ $_ ], $_ ) } keys %ap;

# DFS tree roots are articulation points if and only

# if they have more than one child.


$Alphaville->add_path( qw( BusStation CityHall Mall BusStation ) );

$Alphaville->add_path( qw( Mall Airport ) );

my @ap = $Alphaville->articulation_points;

print "Alphaville articulation points = @ap\n";

This will output the following:

SouthHarbor BusStation OldHarbor Mall

which tells city planners that these locations should be overbuilt to be at least biconnected to avoid congestion.

Strongly Connected Graphs

Directed graphs have their own forte: strongly connected graphs and strongly connected components. A strongly connected component is a set of vertices that can be reached from one another: a cycle or several interlocked cycles. You can see an example in Figure 8-44. Finding the strongly connected components involves the transpose graph GT:

strongly-connected-components ( graph G )
    T = transpose of G
    walk T in depth-first order
    F = depth-first forest of T vertices in their finishing order
    each tree of F is a strongly connected component

The time complexity of this is Θ(|V| + |E|).

# (INTERNAL USE ONLY)

# Returns a graph traversal object that can be used for

# strong connection computations.

sub {
    my ($T, %param) = @_;

    while (my $root = shift @{ $param{ strong_root_order } }) {
        return $root if exists $T->{ pool }->{ $root };
    }
}
);
}


# Returns the strongly connected components @S of the graph $G

# as a list of anonymous lists of vertices, each anonymous list

# containing the vertices belonging to one strongly connected component.

# Clump together vertices having identical root vertices.

while (my ($v, $r) = each %R) { push @{ $C[$r] }, $v }

# Returns the strongly connected graph $T of the graph $G.

# The names of the strongly connected components are

# formed from their constituent vertices by concatenating

# their names by '+'-characters: "a" and "b" -> "a+b".

my @C; # We're not calling the strongly_connected_components()

# method because we will need also the %R.

# Create the strongly connected components.

while (my ($v, $r) = each %R) { push @{ $C[$r] }, $v }

foreach my $c (@C) { $c = join("+", @$c) }

$C->directed( $G->directed );

my @E = $G->edges;

# Copy the edges between strongly connected components.

while (my ($u, $v) = splice(@E, 0, 2)) {

$C->add_edge( $C[ $R{ $u } ], $C[ $R{ $v } ] )


Minimum Spanning Trees

For a weighted undirected graph, a minimum spanning tree (MST) is a tree that spans every vertex of the graph while simultaneously minimizing the total weight of the edges.

For a given graph there may be (and usually are) several equally weighty minimal spanning trees. You may want to review Chapter 5, because finding MSTs uses many of the techniques of traversing trees and heaps.

Two well-known algorithms are available for finding minimum spanning trees: Kruskal's algorithm and Prim's algorithm.

Kruskal's Minimum Spanning Tree

The basic principle of Kruskal's minimum spanning tree is quite intuitive. In pseudocode, it looks like this:

MST-Kruskal ( graph G )
    sort the edges of G by their weights
    keep adding the shortest remaining edge that would not create a cycle
        until every vertex is spanned

The tricky part is the "would not create a cycle." In undirected graphs this can be found easily

by using a special data structure called a union-tree forest. The union-tree forest is a derivative graph. It shadows the connectivity of the original graph in such a way that the forest divides the
