Classification of Topological Patterns using Neural Networks: Towards the Improvement of Routing Mechanisms in Ad-hoc Networks

Wajahat Ali†, Member IEE, Raul J. Mondragón† and Farrukh Alavi§, Member IEEE
† Department of Electronic Engineering, § Department of Computer Science
Queen Mary, University of London, Mile End Road, London E1 4NS, UK
{wajahat.ali, r.j.mondragon}@elec.qmul.ac.uk, [email protected]

Abstract — One of the most vibrant and active "new" fields today is that of ad-hoc networks. In recent years, a variety of new routing protocols targeted specifically at ad-hoc networks have been developed. Current routing algorithms are not adequate to tackle the increasing complexity of such networks, and it is not clear that any particular algorithm or class of algorithms is best for all scenarios: each routing protocol has definite advantages and disadvantages and is well suited to certain situations. In this paper, we approach the problem of routing in ad-hoc networks from a new angle by looking at the topological properties of a network. Recent research by Lada A. Adamic et al. [8] shows that data traffic can be delivered efficiently to its destination even when routing tables are not available, just by exploiting some local topological knowledge of the network. Using a neural-network-based approach, we show that different network topological patterns can be differentiated by means of their adjacency matrices and eigenvalues. We believe this is a first step towards improving the performance of ad-hoc network routing protocols: when routing tables are not available, a protocol can use local topological information instead of routing information. Finally, we discuss some results obtained from the neural network simulation and future directions of our research.

Keywords Ad-hoc Networks, Network Topology, Ad-hoc Routing Protocols, Perceptron Neural Network, Network Spectra.

1. INTRODUCTION

Wireless routing is currently an exciting area in the data communications research community. There has been growing general interest in infrastructure-less or "ad-hoc" wireless networks recently, as evidenced by activities such as the MANET (Mobile Ad-hoc Network) working group within the Internet Engineering Task Force (IETF), whose charter is to address IP routing in ad-hoc networks [1].

1. Wajahat Ali is the corresponding author. 2. Manuscript submitted 9 May 2003.

ISBN: 1-9025-6009-4 © 2003 PGNet

A "mobile Ad-hoc network" (MANET) is an autonomous system of mobile routers (and associated hosts) connected by wireless links, the union of which form an arbitrary graph. The routers are free to move randomly and organize themselves arbitrarily; thus the network's wireless topology may change rapidly and unpredictably. Such a network may operate in a standalone fashion, or may be connected to the larger Internet [2]. Ad-hoc networks are suited for use in situations where infrastructure is either not available, not trusted, or should not be relied on in times of emergency. A few examples include: military soldiers in the field; sensors scattered throughout a city for biological detection; an infrastructure-less network of notebook computers in a conference or campus setting; the forestry or lumber industry; rare animal tracking; space exploration; undersea operations; and temporary offices such as campaign headquarters [3]. The field of ad-hoc network is growing and challenging and there are still many challenges that are required to be met. One of the biggest challenge is routing – how to relay data packets from one node to other and how to do it in efficient and robust way. A number of ad-hoc network routing protocols (Reactive, Proactive) [4] with various design choices have been proposed like Temporally-Ordered Routing Algorithm (TORA) [5], Destination-Sequence Distance Vector (DSDV), Ad-hoc OnDemand Distance Vector (AODV), Wireless Routing Protocol (WRP) and Dynamic Source Routing (DSR) etc. Current routing algorithms are not adequate to tackle the increasing complexity of such networks and it is not clear that any particular algorithm or class of algorithm is the best for all scenarios, each protocol has definite advantages and disadvantages, and is well suited for certain situations [6].

1.1 Exploitation of Topological Information in Fast-Changing Networks

In fast-changing networks it is not always possible to collect routing data from the whole network, because nodes and links can appear or disappear at different time scales, so the routing tables can regularly be out of date. This can mean traffic loss. It is possible to deliver traffic even without knowledge of the routing table, for example by flooding [7], but this technique usually consumes a disproportionate amount of network resources just to keep the network traffic flowing, and it can lead to a situation where the majority of the network traffic is resource-management traffic. Recent research by Lada A. Adamic et al. [8] shows that data traffic can be delivered efficiently to its destination even when routing tables are not available, just by exploiting some local topological knowledge of the network (in this case the probability distribution of the links). In Adamic's approach, each node maintains link information up to its second-level neighbors, and instead of passing a query to a large fraction of the network, queries are passed only to the highest-degree neighbor in order to search for the destination. The results show that by passing queries to the highest-degree node in the neighborhood, the search cost scales sublinearly with the size of the graph. In this paper, we approach the problem of routing by looking at the topological properties of the interconnection network. We believe this is a first step towards improving the performance of an ad-hoc network when the routing information is lost or obsolete: instead of propagating an update of the whole routing table, which is time consuming, a node can use local topological information for routing. To do so we need to differentiate, in general terms, topological properties such as alternative paths, local connectivity and node degree. Different topologies have different local numbers of alternative paths, local connectivity and node degree, which can mean different strategies for routing. We use a neural network to differentiate between network topologies. Section 2 presents the representation of a network by its adjacency matrix and by the eigenvalues of that matrix. Section 3 reviews the Perceptron neural network as a method for classifying different network topologies and presents our results from the neural network simulation. Section 4 contains some discussion and conclusions.
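The degree-biased search of Adamic et al. described above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual algorithm or code; in particular, the graph representation (a dict mapping each node to a set of neighbors) and the dead-end handling are our own simplifying assumptions.

```python
def high_degree_search(graph, start, target, max_hops=100):
    """Forward a query to the highest-degree unvisited neighbor,
    in the spirit of the degree-biased search of Adamic et al. [8].
    `graph` maps each node to the set of its neighbors. Returns the
    number of hops taken, or None if the search fails."""
    current, visited, hops = start, {start}, 0
    while hops < max_hops:
        if target in graph[current]:      # target is a direct neighbor
            return hops + 1
        candidates = graph[current] - visited
        if not candidates:                # dead end: give up (a real
            return None                   # implementation would backtrack)
        # pass the query to the highest-degree unvisited neighbor
        current = max(candidates, key=lambda n: len(graph[n]))
        visited.add(current)
        hops += 1
    return None

# toy graph: node 3 is a hub, so queries funnel through it
g = {1: {3}, 2: {3}, 3: {1, 2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(high_degree_search(g, 1, 5))  # → 2
```

Because high-degree nodes see a large fraction of the network's links, routing through them tends to find the destination quickly, which is the intuition behind the sublinear scaling reported in [8].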

2. METHODOLOGY

In mathematical terms, a network is represented as a graph. A graph G with N nodes can be represented by its adjacency matrix A(G) with N × N elements Aij, where Aij = Aji = 1 if nodes i and j are connected, and 0 otherwise. Given a graph G = (V, E) with vertices V and edges E, the adjacency matrix of G is defined as follows:

Ai,j = 1, if ∃e : e(i, j) ∈ E; 0, otherwise.   (1)

So the adjacency matrix is a symmetric square matrix with Boolean entries; it encodes the number of nodes (vertices) and the links (connections) between the nodes in a graph (network) [9].
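Definition (1) translates directly into code; a minimal sketch (the 4-node edge list is a made-up example, chosen to match the shape of the graphs discussed below):

```python
def adjacency_matrix(n, edges):
    """Build the symmetric Boolean adjacency matrix A of an
    undirected graph with n nodes labelled 0..n-1."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = A[j][i] = 1   # A_ij = A_ji = 1 iff edge {i, j} exists
    return A

# 4-node example: a cycle 0-1-2-3 plus the chord 1-3
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
for row in adjacency_matrix(4, edges):
    print(row)
# → [0, 1, 0, 1]
#   [1, 0, 1, 1]
#   [0, 1, 0, 1]
#   [1, 1, 1, 0]
```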

2.1 Representation of Network by using Adjacency Matrix

The adjacency matrix can be used to represent the topological information of a network, but there is a problem: the same network topology can be represented in many matrix forms.

[Graph drawings of Graph A and Graph B omitted.]

Adjacency Matrix of Graph A:

      1  2  3  4
  1 [ 0  1  0  1 ]
  2 [ 1  0  1  1 ]
  3 [ 0  1  0  1 ]
  4 [ 1  1  1  0 ]

Adjacency Matrix of Graph B:

      1  2  3  4
  1 [ 0  1  1  0 ]
  2 [ 1  0  1  1 ]
  3 [ 1  1  0  1 ]
  4 [ 0  1  1  0 ]

Figure 1: Representation of a Network by its Adjacency Matrix

For example, figure 1 above shows two graphs with the same type of topology (the same number of nodes and links, and the same placement of links) but with the node labels in a different order. When we convert these graphs into their adjacency matrices, we obtain two different matrices: one represents graph A and the other graph B. The entries of the two matrices do not match, although both graphs have the same topology, as shown in figure 1. To solve this problem, we require an alternative representation that does not depend on the order of the entries of the matrix. It is known that certain graph properties/invariants/parameters are closely related to the graph's eigenvalues [10], so we study the graph-theoretical properties of interconnection networks from a new angle by exploring their spectra (eigenvalues).

2.2 Representation of Network by using Eigenvalues of the Adjacency Matrix

The eigenvalues of the adjacency matrix of a graph can reveal certain properties of the graph, since they are closely related to some of its combinatorial invariants [10]. The eigenvalues are obtained by calculating the spectrum of the adjacency matrix and do not depend on the order of the entries of the matrix [11], so they can be used to represent the topology of a graph. If we consider again the graphs of figure 1 and calculate the spectra (eigenvalues) of their adjacency matrices, we get 4 eigenvalues for each of graph A and graph B. Both graphs have the same eigenvalues, as shown in tables 1 and 2 respectively, even though the node labels and the order of the entries of the two adjacency matrices differ, as shown in figure 1.
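This invariance is easy to check numerically; a sketch using NumPy (the matrix A below is one labelling of the 4-node graph of figure 1, and the permutation used to produce the second labelling is arbitrary):

```python
import numpy as np

# one labelling of the 4-node graph of figure 1
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]])

# the same topology with the node labels permuted
perm = [1, 3, 0, 2]            # an arbitrary relabelling
P = np.eye(4)[perm]            # permutation matrix
B = P @ A @ P.T                # B[i][j] = A[perm[i]][perm[j]]

print(np.linalg.eigvalsh(A))   # ≈ [-1.5616, -1, 0, 2.5616]
print(np.linalg.eigvalsh(B))   # same spectrum, different matrix
```

The two matrices differ entry-by-entry, yet their spectra coincide, which is exactly the property exploited in this section.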

Graph A: λ1 = −1.5616, λ2 = −1, λ3 = 0, λ4 = 2.5616

Table 1: Eigenvalues representation of Graph A

Graph B: λ1 = −1.5616, λ2 = −1, λ3 = 0, λ4 = 2.5616

Table 2: Eigenvalues representation of Graph B

From the above example it is clear that eigenvalues are a better way to represent the topology of a graph, so we use eigenvalues for the classification of different topological patterns in the neural network simulation. For a real symmetric matrix, all the eigenvalues are real numbers. Since the adjacency matrix of a graph is symmetric, its eigenvalues and eigenvectors can be calculated using the Jacobi transformation [12].

2.3 Spectra of a Graph (Network)

The spectra of a network describe the relationships between the algebraic properties of certain matrices associated with a graph and the topological properties of that graph; essentially, key structural information is extracted from the graph. The study of graph spectra is based on the eigenvalues and eigenvectors of the graph's adjacency matrix, which give a measure of the dimensionality of the matrix. The spectrum of a graph G is the non-decreasing set of eigenvalues, together with their multiplicities, of its adjacency matrix A(G). A graph G with N nodes has N eigenvalues λ1, λ2, λ3, ..., λN, and it is useful to define its spectral density as

ρ(λ) = (1/N) Σ_{j=1..N} δ(λ − λj),   (2)

which approaches a continuous function as N → ∞. The interest in spectral properties comes from the fact that the spectral density can be related directly to the graph's topological features, since its kth moment can be written as

ρ(k) = (1/N) Σ_{j=1..N} (λj)^k = (1/N) Σ_{i1,i2,...,ik} A_{i1 i2} A_{i2 i3} ... A_{ik i1},   (3)

i.e. the number of paths of length k returning to the same node in the graph. Note that these paths can contain nodes which were already visited [13].

If we set k = 3 in the above equation, we can count the number of triangles (closed paths of order 3) in the graph, i.e.

ρ(3) = (1/N) Σ_{j=1..N} (λj)^3 ≈ number of triangles.   (4)

Similarly, by setting k to any integer value greater than 2 in the above equation, we can count the number of closed paths of order k in the graph. These paths can contain nodes which were already visited.

3. CLASSIFICATION OF TOPOLOGICAL PATTERNS USING A PERCEPTRON NEURAL NETWORK

Neural networks have been applied to a wide variety of problems, such as storing and recalling data or patterns, classifying patterns, performing general mappings from input patterns to output patterns, grouping similar patterns, and finding solutions to constrained optimization problems [14].

One of the simplest neural networks is the single-layer Perceptron, whose weights and biases can be trained to produce a correct target vector when presented with the corresponding input vector. The training technique used is called the Perceptron learning rule. Perceptrons are especially suited to simple problems in pattern classification [15]. In order to classify different network topologies, we generated the following three categories of topologies [13] in the form of adjacency matrices and then calculated their eigenvalues:

§ Regular Network
§ Random Network
§ Scale-free Network

In regular network topologies, all the nodes are uniformly connected with the same number of links. The simplest way to picture a regular network topology is a lattice, composed of a set of vertices V and edges E such that each edge from E connects two vertices v1 and v2 from the vertex set V. Figure 3 shows a ring-lattice regular network, in which every node is uniformly connected to its 4 closest neighbors. In regular network topologies, very long shortest paths typically exist because of the uniform connectivity.

Figure 3: Regular Network

If we make all the connections of the above regular lattice random, the network turns into a random network, as shown in figure 4. A random network is a graph in which the edges are distributed randomly; its topology is assumed to be complex, without any known organizing principle, hence it appears random. The shortest path in a random graph is usually very small.
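The transformation just described, from a regular ring lattice to a random network with the same number of links, can be sketched as follows. This is a simplified construction for illustration; the parameters (10 nodes, degree 4) are chosen to match figure 3, and the rewiring here simply redistributes all edges uniformly rather than following any particular published procedure.

```python
import random

def ring_lattice(n, k=4):
    """Regular ring lattice: each node linked to its k nearest
    neighbours (k/2 on each side), as in figure 3."""
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            A[i][j] = A[j][i] = 1
    return A

def randomize(A, seed=0):
    """Redistribute the same number of edges uniformly at random,
    turning the regular lattice into a random network (figure 4)."""
    rng = random.Random(seed)
    n = len(A)
    m = sum(A[i][j] for i in range(n) for j in range(i + 1, n))
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    R = [[0] * n for _ in range(n)]
    for i, j in rng.sample(pairs, m):   # pick m random node pairs
        R[i][j] = R[j][i] = 1
    return R

A = ring_lattice(10)   # every node has degree 4
R = randomize(A)       # same edge count, random placement
```

The regular lattice has every node at degree 4, while the randomized version keeps the edge count but loses the uniform degree, which is the structural difference the classifier has to pick up.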

Figure 4: Random Network

The scale-free (SF) network topology has two major ingredients: growth and preferential attachment. The network grows by the addition of new nodes, and new nodes prefer to attach to nodes that are already well connected. This leads to a power-law distribution of the number of links per node, as shown in figure 5.

Figure 5: Scale-free Network

In scale-free network topologies, the very highly connected nodes can be used to find short paths in the graph. Scale-free networks have been used to model the topology of the Internet.

3.1 Results

The following graphs show the accuracy of the Perceptron neural network for the classification of different network topological patterns during the training phase. The graphs show the performance of the neural network based on the sum of squared errors. The error during each epoch of training is calculated as

Error = Σ_{i=1..n} (δ − δi)²,   (5)

where δ is the targeted response and δi is the response obtained from the ith input training pattern.

The graphs show that the total squared error for all the topological patterns is equal to zero at the end of the training process, which means that the Perceptron neural network successfully classifies all the topological patterns into their respective categories (regular, random and scale-free). Figure 6 shows the total squared error for the classification of the different network topological patterns using their adjacency matrices; the total squared error is zero at the end of the training process, which means perfect classification.

Figure 6: Total Squared Error for the Classification of Topological Patterns using the Adjacency Matrix

Next, the eigenvalues of the adjacency matrices of the network topological patterns were used in the learning phase of the Perceptron neural network. Again the total squared error is zero at the end of the learning process, implying a perfect classification, as shown in figure 7.

Figure 7: Total Squared Error for the Classification of Topological Patterns using Eigenvalues of the Adjacency Matrix

From figures 6 and 7 it can be seen that the sum of squared errors drops sharply after the first epoch of training, which means that the Perceptron neural network classifies the patterns very quickly. The learning process is faster with the adjacency matrix as the input than with the eigenvalues. Figure 8 shows the total squared error for only the different regular network topological patterns; the graph shows the duration of training required for classification. The Perceptron neural network is able to classify all the regular network topological patterns after 2 epochs of training, which means very efficient learning.
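The original experiments used the Matlab Neural Network Toolbox [15]. As a rough illustration of the same idea, a minimal single-layer perceptron trained with the Perceptron learning rule can be sketched as follows; the training data here is synthetic (a made-up one-dimensional feature standing in for an eigenvalue), not the data used in the paper.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Single-layer perceptron with the classic Perceptron learning
    rule: w += lr * (target - output) * x. Returns the weights and
    the sum-of-squared-errors per epoch (cf. equation (5))."""
    X = np.hstack([X, np.ones((len(X), 1))])   # append a bias input
    w = np.zeros(X.shape[1])
    errors = []
    for _ in range(epochs):
        sse = 0.0
        for x, t in zip(X, y):
            out = 1.0 if x @ w >= 0 else 0.0   # hard-limit activation
            w += lr * (t - out) * x            # Perceptron learning rule
            sse += (t - out) ** 2
        errors.append(sse)
    return w, errors

# toy two-class data: a single feature (e.g. a largest eigenvalue)
# that linearly separates the two classes
X = np.array([[2.56], [2.41], [2.70], [1.90], [1.75], [1.60]])
y = np.array([1, 1, 1, 0, 0, 0])
w, errors = train_perceptron(X, y)
print(errors[-1])   # → 0.0
```

For linearly separable data the Perceptron learning rule is guaranteed to converge, so the per-epoch error trace falls to zero, which mirrors the shape of the error curves in figures 6-8.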

Figure 8: Total Squared Error for the Classification of Regular Network Topological Patterns using Eigenvalues of the Adjacency Matrix

4. CONCLUSION

Our results show that it is possible to distinguish networks with different topologies. A neural network can distinguish different topologies by looking either at the adjacency matrix or at the eigenvalues of the adjacency matrix; in both cases, the Perceptron neural network successfully classified the different network topological patterns.

5. FUTURE WORK

The above results are a first step towards the exploitation of local topological information in ad-hoc networks. As the neural network can classify different network topologies within a short period of training, our next step is to extend this method to networks with changing topology.

REFERENCES

[1] J. F. Hayes, "Guest Editorial: Wireless Ad-hoc Networks", IEEE Journal on Selected Areas in Communications, Vol. 17, No. 8, August 1999.

[2] http://www.ietf.org/html.charters/manet-charter.html

[3] R. Ramanathan and J. Redi, "A Brief Overview of Ad-hoc Networks: Challenges and Directions", IEEE Communications Magazine, 50th Anniversary Commemorative Issue, May 2002.

[4] Z. J. Haas and S. Tabrizi, "On Some Challenges and Design Choices in Ad-hoc Communications", Proceedings of IEEE MILCOM '98, Vol. 1, 18-21 October 1998, pp. 187-192.

[5] V. D. Park and M. S. Corson, "A Highly Adaptive Distributed Routing Algorithm for Mobile Wireless Networks", Proceedings of IEEE INFOCOM '97, Kobe, Japan, April 1997.

[6] E. M. Royer and C.-K. Toh, "A Review of Current Routing Protocols for Ad-hoc Mobile Wireless Networks", IEEE Personal Communications, April 1999.

[7] V. D. Park and M. S. Corson, "A Performance Comparison of the Temporally-Ordered Routing Algorithm and Ideal Link State Routing", Proceedings of the IEEE Symposium on Computers and Communications '98, Athens, Greece, June 1998.

[8] L. A. Adamic, R. M. Lukose, A. R. Puniyani and B. A. Huberman, "Search in Power-Law Networks", HP Labs, Palo Alto / Stanford University, 26 September 2001.

[9] A. Kershenbaum, "Telecommunications Network Design Algorithms", McGraw-Hill International Editions, Computer Science Series, 1993.

[10] K. Qiu and S. K. Das, "Interconnection Networks and their Eigenvalues", Proceedings of the International Symposium on Parallel Architectures, Algorithms and Networks (ISPAN '02), IEEE Computer Society, 2002.

[11] G. Strang, "Linear Algebra and its Applications", New York: Academic Press, 1976.

[12] W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, "Numerical Recipes in C: The Art of Scientific Computing", 2nd Edition, 1992.

[13] R. Albert and A.-L. Barabasi, "Statistical Mechanics of Complex Networks", University of Minnesota / University of Notre Dame, 6 June 2001.

[14] L. Fausett, "Fundamentals of Neural Networks: Architectures, Algorithms, and Applications", Prentice Hall, 1994.

[15] Matlab 6.1 Help, "Using the Neural-Network Toolbox", Perceptron Neural Networks.