LAN Topological Design Method by Gathering Network Components

Damien Wyart and Xavier Castellani
Cédric–Iie (Cnam) Laboratory — Information Systems Analysis and Design Team
18 allée Jean Rostand – 91025 Évry cedex – France
Phone: +33 1 69 36 73 92 / +33 1 69 36 73 34
E-mail: [email protected] / [email protected]

September 23, 2002

Abstract

This paper proposes the core of a method for designing local area networks in a re-usability perspective, using network components stored in libraries. Considering networks from a topological point of view, this core aims at covering a graph G (modeling the topology of a given network) with smaller graphs Ci (carrying the topological information contained in the components). Among the many potential solutions, we would like to find the one optimizing a given criterion (for example, minimizing the total cost of the cover). This solution might then be used as a first basis to effectively build the network, using the corresponding real components. An approximation algorithm for solving this NP-hard optimization problem is presented. It is based on graph matching (a class of problems widely used in the field of pattern recognition) to locate the Ci as subgraphs of G. A greedy heuristic strategy is used to obtain a full cover by gathering these local occurrences. A software toolkit was implemented to test this algorithm, leading to promising first experimental results.

1 Introduction

In this paper, we present a design method for computer networks, re-using components stored in libraries. In the current version of the method, our main target is Local Area Networks (lans). To begin with, the studied problem and its context are briefly presented, along with our main objectives.

1.1 The Problem Studied in this Paper

Any computer network is designed with implementation in mind, and this final phase sets strong topological constraints. That is why the network design process proposed in this paper is primarily driven by the analysis of topological structures.


Graphs are most often used to model the topology of networks. Their nodes and edges represent, respectively, the physical devices of the network and the links (typically cables) between them.

In this paper, our study is limited to lans. They are quite simple from a technical point of view (links are most of the time bidirectional) and not very large in size (number of terminals) or scale (distance between devices). This allows us to consider only undirected and quite small graphs (tens of nodes).

Network components are defined as typical substructures (inspired by design experience and refined over time) commonly used when designing networks. In the statement of our problem, components are modeled as graphs (topological information) with associated descriptive data. In the following, the term component will often refer to the corresponding topological graph.

Given a target network topology as a graph G, the main goal of our design method is to gather network components (repetitions are allowed) in order to get as close as possible to G. As many settings of components might fulfill this goal, one (or more) requirement is added in order to reduce the solution space and get more "relevant" solutions (in a sense directly linked to the adopted requirement). In this paper, each network component has a nominal cost; the supplementary requirement will always be the minimization of the total cost of the components used in building a topology.

To summarize the previous points, these informal explanations are translated into a formal statement of our problem.

Problem P (maximal cover of a graph with minimal overall cost). Let G be an undirected graph and C a set {C1, C2, . . . , Cn} of components, where each Ci is an undirected graph with an associated cost xi. We aim at finding a maximal cover of G by components taken from C, having a minimal overall cost. Such a cover will be called optimal from now on.

To avoid heavy mathematical notation in this introduction, the term cover is used in its intuitive meaning: mapping the nodes and edges of G with all the nodes and edges of some of the Ci. A maximal cover is in fact a "saturated" cover, to which no more Ci can be added. A maximal cover is not necessarily a total cover; in the latter, all nodes and edges have to be covered. This strong requirement is not included in our problem: the covering solutions might leave links or nodes uncovered.

The cost of a cover is mainly obtained by adding the three following subcosts (a data-model sketch follows the list):

• the costs xi of the graphs used to cover;

• the costs induced by connecting some components together;

• the cost induced by implementing the uncovered remainder, if the cover is not total.
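As an illustration of the objects problem P manipulates, here is a minimal data-model sketch in Python; the class and field names are ours, not part of the method, and the connection and remainder subcosts are reduced to opaque parameters:

```python
from dataclasses import dataclass

@dataclass
class Component:
    """A network component: a small undirected graph plus descriptive data."""
    name: str
    edges: frozenset    # frozenset of frozenset({u, v}) node pairs
    cost: float         # the nominal cost x_i

@dataclass
class Cover:
    placements: list        # (Component, node mapping into G) pairs
    uncovered_edges: int    # edges of G left uncovered (0 for a total cover)

def cover_cost(cover, connection_cost, remainder_unit_cost):
    """Sum of the three subcosts listed above (the last two are opaque here)."""
    nominal = sum(c.cost for c, _ in cover.placements)
    return nominal + connection_cost + remainder_unit_cost * cover.uncovered_edges
```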

1.2 Main Goals of the Paper

We devised an approximation algorithm to solve instances of the problem P. Its implementation is able to deal with graphs having from tens to a few hundreds of nodes.


The point of this paper is to present and briefly explain the two key principles used in this algorithm: the use of subgraph isomorphism on the one hand, and of a greedy heuristic on the other. To avoid exploring too many topics at the same time, our whole study is restricted to the limited context outlined in section 1.1. Several classical aspects of network design (flow optimization, congestion problems, simulation of networks, queueing structures) are not covered.

1.3 Contents

In section 2, we review the proposed graph covering algorithm (the core of our network design method) from an explanatory perspective. This section also contains a simple example. In section 3, a very brief overview of the implementation issues of this algorithm is sketched, along with some first results. Finally, a summary of our main achievements and a list of prospects for the near future of this research are given in section 4. The formal description of the covering algorithm is given as an appendix.

2 Overview of the Proposed Graph Covering Method

2.1 Network Components Libraries

Network components are stored in libraries, forming groups that share common features. For example, components having a ring topology can be grouped together. At a higher level, components suited to build some specific kind of network might also form a library. Figure 1 shows a graphical view of four components taken from a simple library.

Figure 1: Sample of a simple library of network components. [figure omitted: a ring (n = 4, e = 4), a star (n = 5, e = 4), a binary tree (n = 7, e = 6) and a string (n = 4, e = 3), where n and e denote the numbers of nodes and edges]

2.2 Key Principles used in the Graph Covering Method

Our graph covering problem P is a combinatorial optimization problem. Formally, it is a tough variant of the weighted set packing problem, which asks for a disjoint collection of weighted subsets of a larger set with maximal total weight. Weighted set packing is a classical NP-hard problem (it is listed in [Garey et al. 1979], the reference book on the complexity of well-known problems) and no efficient algorithm is known to solve it in polynomial time. As a harder variant of this problem, the covering problem P is also NP-hard. This means that examining every potential cover is prohibitive even for small graphs and unfeasible for larger ones: with the data of the example shown in section 2.6 (including the string components), a brute-force approach takes almost eight weeks on a modern pc.

Set packing problems (and several weighted variants) very often occur in the field of operations research. The traditional approach used in that context, integer linear programming (often abbreviated as ilp; for a concise presentation of linear programming techniques, see [Cormen et al. 2001]), only leads to optimal solutions in affordable time when the sizes of the sets are very small (tens of elements). Various greedy approximation strategies have also been given in the literature and analyzed from a theoretical point of view; among them are those of [Chandra et al. 1999], [Hochbaum et al. 1998] and [Vazirani 2001].

Very few approximation algorithms have been proposed to solve generic graph covering problems, and they generally impose restrictions (size, topology) on the graphs. Reviews of these algorithms are given in [Hochbaum 1996], [Goldschmidt et al. 1996] and [Goldschmidt et al. 2001].

Our approach to the graph covering problem P, original to our knowledge, relies directly on its topological content. More precisely: network components are placed one-by-one in the network, using a graph matching algorithm; a greedy heuristic gathers these local instances of the components to get a reasonable global covering solution. We now review these two main aspects in greater detail.
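Before that, for reference, the weighted set packing problem mentioned above has a standard textbook ilp formulation (not taken from this paper): a binary variable x_S selects the subset S, and each element of the ground set U may be used at most once.

```latex
% Textbook ILP formulation of weighted set packing:
% pick a pairwise-disjoint collection of subsets S with maximal total weight.
\max \sum_{S \in \mathcal{S}} w_S \, x_S
\quad \text{subject to} \quad
\sum_{S \,\ni\, e} x_S \le 1 \quad \forall e \in U,
\qquad x_S \in \{0, 1\} \quad \forall S \in \mathcal{S}
```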

2.3 Locating Components in the Network

Our network design method uses graph matching techniques to locate components in the graph modeling a network. This section starts with a survey of graph matching and its applications.

2.3.1 Overview of Graph Matching Methods

Graph matching is a class of methods used to tackle various problems originating from the field of pattern recognition. The three main approaches to graph matching are: graph isomorphism, graph-subgraph isomorphism and maximal common subgraph. The first one is concerned with determining whether two graphs are topologically isomorphic. The second aims at locating instances of a graph g as subgraphs of a larger graph G. The third asks for the largest (considering either nodes or edges) common graph embedded in two distinct graphs. The following question naturally arises: which of these approaches is best suited to a network design method? Two key statements hold in our context:


• To increase reusability, network components are simple and quite small in size. They have, most of the time, fewer nodes than the graphs modeling whole networks and cannot be isomorphic to these graphs: they might only be included in them. The graph isomorphism approach is clearly not suited to our problem.

• As tuning the components is often expensive, it is cheaper and simpler to use them without modification, if possible.

Considering these two facts, subgraph isomorphism appears to be well suited for solving our covering problem at a local level (that is: finding instances of the components in the network).

Subgraph isomorphism plays an important role in various applied fields. The most important ones are summarized in table 2. These works either do not deal directly with graph covering problems or make use of specific kinds of graphs; that is why they could not be reused as is in our context.

The now classical algorithm for solving subgraph isomorphism problems, still widely used, was proposed by Ullmann in 1976. Its main idea consists in pruning the search tree, looking ahead to discard useless branches. This already makes it much faster than the purely brute-force approach. See [Ullmann 1976] for details on this algorithm.

A pattern recognition team working in Napoli (Italy) recently (1999) devised a new algorithm called VF (V and F are the initials of Vento and Foggia, the two main authors), having much better time and space complexity than Ullmann's algorithm. Two years later, they proposed a new version, VF2, improving the space complexity for very large graphs.

It is also very important to notice that the subgraph isomorphism approach leads to NP-hard problems (this was shown by Cook in [Cook 1971]). This means that, even with clever optimizations (such as the ones introduced in VF), graphs of large size cannot be handled in practice. The current limit is around one thousand nodes for untyped graphs (this figure is given by Jolion in [Jolion 2002]).

Table 1, made from results proved in [Cordella et al. 1999], [Cordella et al. 2001] and [Ullmann 1976], summarizes the complexities of the three classical subgraph isomorphism algorithms. The Θ notation is classically used when analysing algorithms: it means that the considered growth function is bounded both from above and below. See [Cormen et al. 2001] for more information on this notation.

Algorithm   Temporal complexity          Temporal complexity          Spatial complexity
            (best case)                  (worst case)
Ullmann     Θ(N³)                        Θ(N! · N³)                   Θ(N³)
VF          Θ(N²)                        Θ(N! · N)                    Θ(N²)
VF2         Θ(N³)                        Θ(N! · N)                    Θ(N)

Table 1: Complexity analysis of three common subgraph isomorphism algorithms.


Pattern recognition
  Main uses: scene analysis, object locating.
  Mean number of simultaneously sought isomorphisms: low.
  Characteristics of the graphs used: labelled, typed, undirected, large and dense.
  Relation to covering of graphs: none.
  Other specific points: inexact matching, based on a weaker definition of isomorphism.
  Key references: [Bunke et al. 1997], [Jolion 2002], [Messmer et al. 1998].

Molecular chemistry
  Main uses: information retrieval from databases of compounds, design of new organic molecules.
  Mean number of simultaneously sought isomorphisms: low.
  Characteristics of the graphs used: labelled, typed, undirected, small but numerous.
  Relation to covering of graphs: none.
  Other specific points: inexact matching is sometimes used.
  Key references: [Barnard 1993].

Electronics
  Main uses: circuit checking and design.
  Mean number of simultaneously sought isomorphisms: high.
  Characteristics of the graphs used: labelled, typed, directed, very large and dense.
  Relation to covering of graphs: components cover a circuit.
  Other specific points: none.
  Key references: [Ohlrich et al. 1993].

Biology
  Main uses: computational genetics, structural analysis of biological structures.
  Mean number of simultaneously sought isomorphisms: depends on the precise context.
  Characteristics of the graphs used: labelled, undirected, large.
  Relation to covering of graphs: in some applications, substructures might cover a larger biological structure.
  Other specific points: often used with classical substring location methods; related to chemistry.
  Key references: [Koch et al. 1992], [Mitchell et al. 1989].

Data mining
  Main uses: fast location of information in databases.
  Mean number of simultaneously sought isomorphisms: low.
  Characteristics of the graphs used: most often directed and very large.
  Relation to covering of graphs: none.
  Other specific points: hierarchical databases are well-suited; recent works, not very stable yet.
  Key references: [Kuramochi et al. 2001].

Table 2: Main domains where use of subgraph isomorphism methods is significant.

VF2 will certainly become the new standard algorithm for dealing with subgraph isomorphism problems. From now on, we will implicitly assume that VF2 is used when referring to a subgraph isomorphism solving algorithm.

2.3.2 Using Subgraph Isomorphism to Locate Components

Some definitions are now introduced to explain the details of our method.

Definition 1. A graph is a pair G = (N, E) of sets satisfying E ⊆ [N]². Here, [N]² denotes the set of all subsets of N having exactly two elements.

In this paper, as graphs model topologies of networks, N and E will always be finite. A subgraph S of a graph G = (N, E) is a graph (NS, ES) such that NS ⊆ N and ES = E ∩ [NS]². This definition of a graph is taken from Diestel [Diestel 2000]. Concerning subgraph isomorphism, the following definition, slightly adapted from the one proposed by Valiente [Valiente 2001], is used:

Definition 2. A graph G1 = (N1, E1) is isomorphic to a subgraph S2 of a graph G2 = (N2, E2) if there is an injection ϕ : N1 → N2 such that, for every pair of nodes ni, nj ∈ N1, if (ni, nj) ∈ E1 then (ϕ(ni), ϕ(nj)) ∈ E2.

For simplicity, the term isomorphism has been chosen, while the defined relationship is more technically called monomorphism (strict isomorphism relies on a bijection). For example, there exist three distinct subgraph isomorphisms of the graph G1 to the graph G2 drawn in figure 2. They are shown in figure 3, with dotted lines denoting edges of G1 mapped to edges of G2.

Figure 2: Two graphs G1 and G2. [figure omitted: G1 has nodes 1′, 2′, 3′ and G2 has nodes 1, 2, 3, 4]

Figure 3: The three distinct subgraph isomorphisms of G1 to G2. [figure omitted]

Fully symmetric mappings (for example, ϕ1 = {1′ → 2, 2′ → 3, 3′ → 4} and ϕ′1 = {1′ → 2, 2′ → 4, 3′ → 3}) are viewed as the same isomorphism, as they are strictly equivalent in our context (neither nodes nor edges are typed).

Subgraph isomorphism is able to locate network components inside a network, but this is not enough: a cover is made of many graphs, and a method gathering the local instances (to get a full cover) is needed.
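To make Definition 2 concrete, such matches can be enumerated with the networkx library (version 2.4 or later), whose GraphMatcher class implements VF2. The shapes of G1 and G2 below are our assumption, since the figures are not reproduced here: a three-node star for G1 and a four-node star for G2 are consistent with the three matches reported above.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Assumed shapes: G1 is a 3-node star with center 1', G2 a 4-node star
# with center 2 (consistent with the three matches reported in the text).
G1 = nx.Graph([("1'", "2'"), ("1'", "3'")])
G2 = nx.Graph([(2, 1), (2, 3), (2, 4)])

# GraphMatcher implements VF2; the first argument is the larger "host" graph.
# Each yielded dict maps host nodes to pattern nodes, i.e. it is the inverse
# view of an injection phi satisfying Definition 2.
gm = isomorphism.GraphMatcher(G2, G1)
matches = list(gm.subgraph_monomorphisms_iter())

# Six injections are produced; identifying fully symmetric mappings (the
# swap of the two leaves of G1) leaves the three distinct isomorphisms.
print(len(matches))  # 6
```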

2.4 A Heuristic Strategy for Gathering Network Components

2.4.1 Designing a Pertinent Strategy

The following definition of a graph cover is proposed:


Definition 3. A set of graphs {C1, . . . , Cn} covers a graph G if and only if there exists at least one isomorphism Φi per graph Ci, mapping it onto a subgraph of G. The set of all these isomorphisms is then called a cover of G. If each edge of G is mapped by at least one isomorphism, the cover is said to be total.

As our graph covering problem is NP-hard, it is clearly impossible (for a graph having more than a few nodes) to go through the whole solution space in affordable time. An approximation strategy, more reasonable in time but still aiming at finding a pertinent solution (if not an optimal one), is required.

Because handling the overlapping of components is crucial to cover a graph, the current state of the graph (already covered nodes and edges) has to be taken into account as we look for components in a topology. That is why components are located one by one (the search is serialized). The order of this search is in fact the main determinant of the obtained covering solution.

As components are looked for one-by-one, we have to know which match will be given by the subgraph isomorphism algorithm when several matches exist. Existing algorithms always return the "first" match (assuming that nodes and edges of graphs are numbered). This is clearly a drawback in our strategy design approach: the search space would thus be implicitly restricted. Existing algorithms have therefore been adapted to introduce randomization of the matches: when several matches exist, the randomized algorithms return a match chosen at random. The positive aspect of this randomization is that it allows escaping from uninteresting matches; without it, the same solution would always be returned (when all other parameters are fixed), whatever its quality. It has a drawback, however: it is not clever enough to select better solutions on its own. This will be investigated in the near future.

Using subgraph isomorphism to cover graphs also raises the problem of deciding precisely how overlapping between several components is handled. The following conventions are adopted. On the one hand, sharing nodes between components is allowed: in our networking context, this means that once a machine is installed, it is not necessary to duplicate it only because it is needed by another component. On the other hand, edges are considered more crucial: in a network, missing cables are less tolerable than missing machines, since installing a machine without the cables needed to connect it is equivalent, from a networking point of view, to not installing it at all. This is why sharing edges (links) between components is far less pertinent than sharing nodes (devices).

Up to now, two distinct approaches to handling overlapping of edges from distinct components have been implemented:

• The first one considers that edges from components used to cover must not overlap. Once a component is placed, the corresponding edges on the network topology are marked as "covered" and cannot be covered any more. In networking terms, this implies that distinct components must not share links (edges) between machines.

• The second approach allows link overlapping on two conditions: that the shared links are not too numerous (an upper limit is fixed at the beginning of the covering process) and not overloaded (another given limit is used: the number of allowed components sharing a link).

In both cases, some links might be left uncovered by the process: nothing guarantees that the components chosen by the user to cover a network may be gathered to get a total cover.
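As a small illustration of this randomization (our own sketch, not the adapted Vflib code), one way to draw a match uniformly is to enumerate all monomorphisms first, which stays affordable for the small components considered here:

```python
import random
from networkx.algorithms import isomorphism

def random_match(G, component, rng=random):
    """Return one monomorphism of `component` into G chosen at random,
    or None when the component cannot be located in G at all."""
    gm = isomorphism.GraphMatcher(G, component)
    matches = list(gm.subgraph_monomorphisms_iter())
    return rng.choice(matches) if matches else None
```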

2.4.2 The Greedy Principle

Components are placed on the network one-by-one: the whole process is then made of several distinct steps. At each step, a decision has to be taken (either automatically or with the intervention of a user): one component to be searched for is selected. This selection might be based on various criteria; as these criteria might depend on the current matching opportunities, we have to search for each component independently in the topology before selecting it. The global order in which the components are searched is thus constructed step by step. This is the basic principle of a greedy approach: at each step, the "best" choice is made, hoping to finally obtain the overall "best" solution. Michalewicz and Fogel give in [Michalewicz et al. 2000] many details about greedy strategies and general heuristic design.

Components might also be divided into subgroups (which may be subsets of libraries), and the criteria can be refined to pick components in subgroups, e.g. by assigning priorities. At each step, a component is matched as many times as possible, because using a component several times often leads to some kind of economy of scale.

Two very simple criteria come to mind in our context. The first one asks for the cheapest component at each step, thus hoping to get a globally cheap covering solution. The second one asks for the biggest component at each step, thus hoping to use fewer components and reduce the overall cost. In fact, these two criteria do not depend on the current state of the greedy search and, when applied as is, they are strictly equivalent to a naive strategy with a predetermined ordering of the components. However, the greedy approach can be based on these two main criteria (or one of them alone) and refined with other parameters (depending on the current greedy state).
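The sketch below is a simplified reading of this greedy gathering under the first (edge-disjoint) overlap convention of section 2.4.1; it is not the authors' implementation: networkx stands in for the adapted Vflib, the criterion is a plain key function, and a single instance of the selected component is placed per step (whereas the method described above matches it as many times as possible).

```python
import networkx as nx
from networkx.algorithms import isomorphism

def greedy_cover(G, components, criterion):
    """components: list of (graph, cost) pairs.
    criterion: key function ranking candidates, e.g. lambda c: c[1] (cheapest)
    or lambda c: -c[0].number_of_edges() (biggest)."""
    covered = set()            # edges of G already used, as frozenset({u, v})
    placements = []
    while True:
        placed = False
        for comp, cost in sorted(components, key=criterion):
            gm = isomorphism.GraphMatcher(G, comp)
            for match in gm.subgraph_monomorphisms_iter():
                inv = {v: u for u, v in match.items()}  # component -> G nodes
                edges = {frozenset((inv[a], inv[b])) for a, b in comp.edges()}
                if edges & covered:
                    continue        # first approach: links must not be shared
                covered |= edges    # nodes, in contrast, may be shared freely
                placements.append((comp, cost, inv))
                placed = True
                break
            if placed:
                break               # re-evaluate the criterion at the next step
        if not placed:
            # maximal cover: no component can be placed any more
            return placements, covered
```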

2.5 Evaluation of the Covering Solutions

In order to check whether a particular covering solution given by the proposed algorithm is "interesting", a way to evaluate solutions quantitatively is needed. This will also be useful to sort a set of solutions to a given covering problem. Two kinds of numerical variables can be used to evaluate a covering solution: on the one hand, various costs can be taken into account; on the other hand, several topological indices might be significant for measuring the "quality" of a solution. Among cost data are:

• the total sum of the costs associated with the components used in the cover;


• the cost induced by uncovered links and uncovered nodes (this might, for example, be the number of such elements times a nominal cost);

• the cost of connecting components sharing equipment;

• the cost bound to the variety of the components involved in the solution: repeating the same component or taking several components from one library can save money.

Topological indices that might be used include:

• the number of components used;

• the number of edges;

• the mean and maximum density of the components;

• the number of topological flaws (for example, when isthmuses are present in a cover, the corresponding network will be "topologically" less secure).

Our proposal for a generic quantitative evaluation of solutions is as follows. First of all, choose some relevant variables (depending on the context in which the algorithm is used) among the ones listed above. Then, build pertinent evaluation functions as linear combinations of coherent parameters. Finally, assign priorities to these functions. This leads to a measuring scheme able to sort several solutions returned by the algorithm and give a numerical estimate of a particular cover. All these considerations might also be used to devise fine-tuned criteria for the greedy step of the algorithm: as explained above, the objective of the greedy decision is often the same as the global objective wanted for the final solution.

In the following example, a quite simple function F will be used to evaluate covering solutions. Given a cover S, we define:

F(S) = k1 E1(S) + k2 E2(S) + k3 E3(S).

E1(S) represents the total nominal cost for S: it is the sum of the component costs xi. E2(S) is the number of uncovered links. E3(S) denotes the number of components used in S. The following values will be used: k1 = 1, k2 = 1000 and k3 = 500. Uncovered edges are considered more expensive than the number of components used because they more significantly decrease the quality and pertinence of the solution. This evaluation function is homogeneous and represents a total cost. From now on, it will be called the cost function.
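As a sketch (the function name is ours), this cost function is straightforward to express and can be checked against table 4 below:

```python
def cost_function(E1, E2, E3, k1=1, k2=1000, k3=500):
    """F(S) = k1*E1(S) + k2*E2(S) + k3*E3(S): total nominal cost, number of
    uncovered links and number of components, with the weights of the example."""
    return k1 * E1 + k2 * E2 + k3 * E3

# For solution S1 of table 4: E1 = 2340, E2 = 0, E3 = 3.
assert cost_function(2340, 0, 3) == 3840
```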

2.6 Application to a Simple Example

To illustrate the way our covering algorithm works and highlight some of its characteristics, its behaviour is shown on a simple example. We use the graph shown in figure 4 as the input network G and the list C = {C1, · · · , C21} of network components, whose elements are listed in table 3. In each group of components, elements are ordered by number of nodes; to keep the presented model simple, nominal costs have been chosen as linear in the number of nodes.


Figure 4: A graph G modeling a simple network topology. [figure omitted: an undirected graph with 15 nodes, numbered 1 to 15]

Groups of components   Description     Number of nodes (n)   Number of edges (e)   Cost
C1, C2, C3             Binary trees    7, 13, 21             e = n − 1             120 × n
C4, C5, C6             Ternary trees   15, 40, 85            e = n − 1             120 × n
C7, . . . , C11        Stars           from 4 to 8           e = n − 1             150 × n
C12, . . . , C16       Rings           from 4 to 8           e = n                 150 × n
C17, . . . , C21       Strings         from 4 to 8           e = n − 1             100 × n

Table 3: Components used in the example to cover the graph shown in figure 4.
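For illustration, the star, ring and string groups of this table are easy to build programmatically; here is a minimal sketch using networkx graph generators (the helper names are ours; the two tree groups are omitted for brevity):

```python
import networkx as nx

# Components C7..C21 of table 3, as (graph, cost) pairs.
def star(n):   return nx.star_graph(n - 1), 150 * n   # n nodes, n - 1 edges
def ring(n):   return nx.cycle_graph(n),    150 * n   # n nodes, n edges
def string(n): return nx.path_graph(n),     100 * n   # n nodes, n - 1 edges

stars   = [star(n)   for n in range(4, 9)]   # C7  .. C11
rings   = [ring(n)   for n in range(4, 9)]   # C12 .. C16
strings = [string(n) for n in range(4, 9)]   # C17 .. C21
library = stars + rings + strings
```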

Let us choose the following straightforward greedy criteria: k1 = "select the component having the maximal number of edges" and k2 = "select the component having a minimal cost". In this example, the first approach to overlapping of components (see section 2.4.1; components must not overlap) is used.

To highlight the pertinence of our approach and insist on the influence of randomizing the graph matching process, the algorithm is run several times with the criterion k1 and the subset C \ {C17, · · · , C21} (all proposed components except strings, which are too simplistic to be much used in real implementations). We get different covers (depending on where each Cj is located first), among which two are represented in figure 5. The components are highlighted with dotted lines and uncovered edges are left undotted.

We now turn to showing the crucial relationship between the covering solutions and the greedy criterion chosen to obtain them. Running the algorithm with the criterion k1 and the sublist {C17, . . . , C21} of C (the strings, only selected here for example purposes), the covering solution pictured in figure 6 is obtained. Using the criterion k2 and the same subset {C17, . . . , C21} of C, the algorithm gives the solution


shown in figure 7. Detailed evaluations of these solutions (using the cost function F defined above) are given in table 4.

Covering solution                          S1     S2     S3     S4
Total cost of the components used (E1)     2340   2040   800    1200
Number of uncovered edges (E2)             0      2      8      6
Number of components used (E3)             3      3      1      3
Total cost (E1 + 1000 × E2 + 500 × E3)     3840   5540   9300   8700

Table 4: Evaluation of the four solutions given in the example.

A brute-force search (done in 6 hours on this example) shows that solution S1 is in fact the "best" solution (it minimizes the chosen cost function F). It is obtained in a few seconds with our method. Because of the introduced randomization, other runs of the algorithm (with the same input data) might lead to the weaker cover shown in figure 5 (b); but without randomization, the better solution S1 (figure 5 (a)) would not have been found at all (a test with randomization disabled confirms this).

Given the same set of components, running the algorithm with two different criteria (which both make sense with cost-minimization in mind) leads, most of the time, to very different results. This is clearly shown in figures 6 and 7; these solutions are more expensive (using F) than the two from figure 5.

We now turn to surveying some important points concerning our implementation of the graph covering algorithm.

3 Implementation Issues

3.1 The Main Software Tools Used

The three common algorithms for solving subgraph isomorphism problems (Ullmann, VF and VF2) were implemented in C++ by Pasquale Foggia, a member of the team which designed VF and VF2. The corresponding software library, Vflib, is freely available at [Cordella et al.]. This library had to be adapted (changing the isomorphism condition, tuning some data structures) and extended (adding our own data structures, generalizing some design choices, introducing randomization) in order to make it suitable for solving our covering problem. The Python programming language (more information is given at [Python]) was chosen to implement several variants of the algorithm presented here, as well as other experimental covering algorithms (including one following a more naive approach, computing the ordering of components statically).

Figure 5: Two covers, S1 (a) and S2 (b), obtained after running the covering algorithm on the topology drawn in figure 4, with criterion k1 and the subset {C1, · · · , C16} of components. [figures omitted]

Figure 6: A cover S3 resulting from running the covering algorithm with the criterion k1 and the subset {C17, . . . , C21} of components. [figure omitted]

Figure 7: A cover S4 resulting from running the covering algorithm with the criterion k2 and the subset {C17, . . . , C21} of components. [figure omitted]

An option allowing the user to select a component manually at each step of the greedy algorithm is also available. This allows further testing of the greedy criteria and might be useful to introduce supplementary requirements in a cover (for example, if a given component has to be used in a particular network, this can be achieved easily). Another extension allows assigning priorities to the components to be selected, so that more pertinent solutions are found in the earlier greedy steps. Libraries of components might also be modified dynamically, adding tuned components if needed; this allows greater flexibility than using static ("frozen") libraries.

3.2 First Tests and Results

Two main groups of tests were made with our implementation of the covering algorithm and some of its variants.

In the first group, we tried to highlight the influence of the chosen greedy criteria. Small input graphs G were used (20-30 nodes). An exhaustive search approach (to get the optimal solution in very small cases) has also been implemented; it is not convenient to use, because a single run can take several days to complete. We noticed that, in cases where the optimal solution can be found, our algorithm finds it most of the time, with a fine-tuned criterion and after several runs (randomization often helps).

In the second group of tests, larger (100-500 nodes), randomly generated graphs were used. A program generating random connected graphs has been written (a sketch is given after the table below); their characteristics depend on three parameters: the maximal number of nodes, the density of edges and a given probability of adding new nodes at each generating step. Larger graphs significantly increase the running time, which can reach a few tens of minutes to cover a 200-node graph with edge-density 0.01 (the edge-density of a graph is the ratio of its number of edges to the number of edges of the complete graph having the same number of nodes).

Size of the network   Mean running time of our algorithm
< 15 nodes            < 10 s
< 50 nodes            < 30 s
< 100 nodes           < 5 min
< 200 nodes           < 30 min
< 500 nodes
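To give an idea of such a generator, here is a minimal sketch based on our own reading of the three parameters (the authors' exact procedure is not described here): grow a connected graph node by node, then top up edges until the requested edge-density is reached.

```python
import random
import networkx as nx

def random_connected_graph(max_nodes, edge_density, p_new_node, seed=None):
    """Random connected graph sketch; assumes max_nodes >= 2, p_new_node > 0."""
    rng = random.Random(seed)
    G = nx.path_graph(2)                       # start connected: nodes 0 and 1
    while G.number_of_nodes() < max_nodes:
        if rng.random() < p_new_node:
            n = G.number_of_nodes()
            G.add_edge(n, rng.randrange(n))    # new node, attached: stays connected
        else:
            u, v = rng.sample(range(G.number_of_nodes()), 2)
            G.add_edge(u, v)                   # extra edge between existing nodes
    # add further edges until the requested edge-density is reached
    target = int(edge_density * max_nodes * (max_nodes - 1) / 2)
    while G.number_of_edges() < target:
        u, v = rng.sample(range(max_nodes), 2)
        G.add_edge(u, v)
    return G

G = random_connected_graph(200, 0.01, 0.7, seed=0)   # a test graph like those used
```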