Networks and Heterogeneous Media
June 2008 , Volume 3 , Issue 2
A special issue on Networks and Complexity
This issue of Networks and Heterogeneous Media (NHM) collects selected papers of the European Conference on Complex Systems ’07, which took place in Dresden, Germany, from October 1 to 6, 2007.
Studying complex systems has profoundly changed our view of the world. The discovery that actions and reactions are often disproportionate, and that small perturbations can cause tremendous responses, has led to new scientific disciplines such as catastrophe theory, chaos theory, and the theory of phase transitions.
We study the mean field approximation of a recent model of cascades on networks relevant to the investigation of systemic risk control in financial networks. In the model, the hypothesis of a trend reinforcement in the stochastic process describing the fragility of the nodes, induces a trade-off in the systemic risk with respect to the density of the network. Increasing the average link density, the network is first less exposed to systemic risk, while above an intermediate value the systemic risk increases. This result offers a simple explanation for the emergence of instabilities in financial systems that get increasingly interwoven. In this paper, we study the dynamics of the probability density function of the average fragility. This converges to a unique stationary distribution which can be computed numerically and can be used to estimate the systemic risk as a function of the parameters of the model.
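The trend reinforcement described above can be illustrated with a toy stochastic process in which a node's fragility moves up or down with a probability biased in the direction of the previous move. The update rule and the parameter `alpha` here are illustrative assumptions, not the paper's mean-field equations:

```python
import random

def trend_reinforced_walk(steps=1000, alpha=0.3, seed=0):
    """Toy trend-reinforced process for a node's fragility: the
    probability of an upward move is biased by alpha in the direction
    of the previous move.  A stand-in illustration of 'trend
    reinforcement', not the model's exact dynamics."""
    rng = random.Random(seed)
    x, last = 0.0, 0
    path = [x]
    for _ in range(steps):
        p_up = 0.5 + alpha * last / 2      # bias toward the last move
        step = 1 if rng.random() < p_up else -1
        x += step
        last = step
        path.append(x)
    return path
```

With `alpha = 0` this reduces to an unbiased random walk; larger `alpha` produces the longer runs in one direction that make cascades possible.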
We investigate some of the properties and extensions of a recently introduced dynamic innovation network model. In the model, the set of efficient graphs ranges, depending on the cost of maintaining a link, from the complete graph to the (quasi-)star, varying within a well-defined class of graphs. However, the interplay between the dynamics on the nodes and the topology of the network leads to equilibrium networks which are typically not efficient and which are characterized, as observed in empirical studies of R&D networks, by sparseness, the presence of clusters, and heterogeneity of degree. In this paper, we analyze the relation between the growth rate of the knowledge stock of the agents from R&D collaborations and the properties of the adjacency matrix associated with the network of collaborations. By means of computer simulations we further investigate how the equilibrium network is affected by increasing the evaluation time $\tau$ over which agents decide whether to maintain a link or not. We show that only if $\tau$ is long enough can efficient networks be obtained by the selfish link-formation process of the agents; otherwise the equilibrium network is inefficient. This work should assist in building a theoretical framework of R&D networks from which policies can be derived that aim at fostering efficient innovation networks.
Recently we showed that a simple model of network rewiring can be solved exactly for any time and any parameter value. We also showed that this model can be recast in terms of several well-known models of statistical physics, such as the urn model and the voter model, and noted that it has been applied to a wide range of problems. Here we consider various generalisations of this model and present some new exact results.
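A minimal sketch of a rewiring dynamics of this kind: at each step a randomly chosen agent abandons its current choice and re-attaches either to a uniformly random option or by copying another agent. The parameter names and the specific update rule are illustrative assumptions, not the paper's exact specification:

```python
import random

def rewire_step(choices, n_options, p_random, rng):
    """One update of a simple rewiring model: a randomly chosen agent
    re-attaches either to a uniformly random option (innovation) or by
    copying another randomly chosen agent's option (transfer)."""
    agent = rng.randrange(len(choices))
    if rng.random() < p_random:
        choices[agent] = rng.randrange(n_options)            # random rewiring
    else:
        choices[agent] = choices[rng.randrange(len(choices))]  # copying

def simulate(n_agents=100, n_options=100, p_random=0.1, steps=10_000, seed=0):
    rng = random.Random(seed)
    choices = [rng.randrange(n_options) for _ in range(n_agents)]
    for _ in range(steps):
        rewire_step(choices, n_options, p_random, rng)
    return choices
```

The exact solvability referred to in the abstract concerns the full time-dependent distribution of such a process; the simulation above only samples it.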
The structure of networks can be characterized by the frequency of different subnetwork patterns found within them. Where these frequencies deviate from what would be expected in random networks, the patterns are termed “motifs” of the network. Interestingly, networks performing similar functions are often found to exhibit similar motif frequencies. We present results from a motif analysis of networks produced by peer-to-peer protocols that support cooperation between evolving nodes. We were surprised to find that their motif profiles closely match those of protein structure networks. Precisely why this is so remains an open question.
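The subgraph-census step underlying such a motif analysis can be sketched for the simplest case, undirected 3-node patterns; a real motif profile would also compare these counts against a randomized ensemble to obtain significance scores:

```python
from itertools import combinations

def triad_counts(adj):
    """Count the two connected 3-node subgraphs of an undirected graph
    given as an adjacency dict {node: set(neighbours)}: open triads
    (paths of length 2) and closed triads (triangles)."""
    triangles = paths = 0
    for a, b, c in combinations(adj, 3):
        edges = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        if edges == 3:
            triangles += 1
        elif edges == 2:
            paths += 1
    return {"triangle": triangles, "path": paths}
```

Directed networks have 13 connected 3-node patterns instead of 2, but the enumerate-and-classify structure is the same.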
More than half of Internet traffic today is contributed by peer-to-peer (P2P) systems. Yet P2P systems build their overlay topology largely agnostic of the Internet underlay, which often leads to traffic management challenges for Internet Service Providers (ISPs) and potentially inefficient neighbourhood selection for P2P nodes. To overcome this, we propose to use an oracle hosted by the ISPs, so that ISPs and P2P users can cooperate for improved performance. The oracle can be queried by P2P nodes while choosing neighbours for content search, and it will rank the possible neighbours of the querying node according to a locality indication, like location within the same AS, or the AS-hop distance. The ISP gains by keeping traffic within its Autonomous System (AS), and the P2P node can experience improved performance, such as lower delay and better bandwidth.
In this paper, we evaluate the benefits of our scheme by performing experiments in a real testlab as well as in a simulation framework. We show how we configure representative AS topologies for P2P networks, and present experimental results for the content search phase of a P2P network using different file-sharing and search-query distributions.
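The oracle's ranking step can be sketched as a simple sort over candidate peers, preferring peers inside the querier's own AS and then increasing AS-hop distance. The function names and the tuple encoding of candidates are illustrative assumptions, not the paper's interface:

```python
def oracle_rank(querier_as, candidates, as_distance):
    """Hypothetical ISP oracle: rank candidate peers of a querying node,
    preferring peers in the querier's own AS, then by increasing AS-hop
    distance.  `candidates` is a list of (peer_id, peer_as) pairs and
    `as_distance(a, b)` is assumed to return the AS-hop count."""
    def key(peer_as):
        return (0 if peer_as == querier_as else 1,
                as_distance(querier_as, peer_as))
    return sorted(candidates, key=lambda c: key(c[1]))
```

The P2P node then contacts the top-ranked peers first, which keeps traffic local without changing the protocol itself.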
We propose a series of methods to reconstruct and represent the evolution of a field of science at different levels: namely the micro, meso and macro levels. We use a previously introduced asymmetric measure of paradigmatic proximity between terms that enables us to extract structure from a large publications database. We apply our set of methods to a case study from the complex systems community, mapping more than 400 complex systems science concepts indexed from a database as large as several million journal papers. We first summarize the main properties of our asymmetric proximity measure. Then we show how salient paradigmatic fields can be embedded in a 2-dimensional visualization in which terms are plotted according to their relative specificity and generality indices. This meso level helps us produce macroscopic maps of the field of complex systems science, built upon the former paradigmatic fields and their articulations.
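As an illustration of why an asymmetric proximity between terms is natural, a simple conditional-co-occurrence form can be computed from a document collection. This is a stand-in for the idea, not necessarily the exact measure used in the paper:

```python
from collections import Counter

def asymmetric_proximity(docs):
    """Toy asymmetric proximity between terms from co-occurrence:
    prox(i -> j) = (# docs containing both i and j) / (# docs containing i).
    A general term j can score high from a specific term i while the
    reverse direction stays low, which is the asymmetry exploited for
    specificity/generality indices."""
    single, pair = Counter(), Counter()
    for terms in docs:
        terms = set(terms)
        single.update(terms)
        for i in terms:
            for j in terms:
                if i != j:
                    pair[(i, j)] += 1
    return {ij: pair[ij] / single[ij[0]] for ij in pair}
```

In a real corpus the term lists would come from the indexed keywords of each publication.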
The modeling of realistic networks is of prime importance for modern complex systems research. Previous procedures typically model the natural growth of networks by means of iteratively adding nodes, geometric positioning information, a definition of link connectivity based on the preference for nearest neighbors or already highly connected nodes, or combine several of these approaches.
Our novel model brings together the well-known concepts of $k$-cores, originally introduced in social network analysis, and preferential attachment. Recent studies exposed the significant $k$-core structure of several real-world systems, e.g., the AS network of the Internet. We present a simple and efficient method for generating networks which strictly adhere to the characteristics of a given $k$-core structure, called the core fingerprint, while at the same time featuring a power-law degree distribution. We showcase our algorithm in a comparative evaluation with two well-known AS network generators.
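One of the two ingredients named above, preferential attachment, can be sketched in a few lines; the core-fingerprint constraint that distinguishes the paper's generator is not reproduced here:

```python
import random

def preferential_attachment(n, m, seed=0):
    """Minimal Barabasi-Albert-style generator: each new node attaches m
    links to existing nodes with probability proportional to degree,
    implemented by sampling from a list in which every node appears once
    per incident edge."""
    rng = random.Random(seed)
    edges = [(i, (i + 1) % (m + 1)) for i in range(m + 1)]  # small seed ring
    targets = [u for e in edges for u in e]  # degree-weighted sampling pool
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:            # m distinct, degree-biased targets
            chosen.add(rng.choice(targets))
        for old in chosen:
            edges.append((new, old))
            targets.extend((new, old))
    return edges
```

Sampling uniformly from the `targets` list is what makes high-degree nodes proportionally more likely to receive new links.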
Different types of macroscopic reaction kinetics can be derived from microscopic molecular interactions, with the law of mass action being the most widely used one in standard situations. After such a modeling step, where primarily the types of reactions are identified, it becomes a problem to analyse qualitative properties of complete regulatory networks. This problem has to be tackled, because chemical reaction networks play a part in some of the most fundamental cellular processes such as cell metabolism and regulation of cell signalling processes. This paper discusses how reaction networks can be described and analysed by graph theoretic means. Graph theory is a useful analysis tool for complex reaction networks, in situations where there is parameter uncertainty or modeling information is incomplete. Graphs are very robust tools, in the sense that whole classes of network topologies will show similar behaviour, independently of precise information that is available about the reaction constants. Nevertheless, one still has to take care to incorporate sufficient dynamical information in the network structure, in order to obtain meaningful results.
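The graph-theoretic description of a reaction network typically starts from a bipartite species-reaction graph; a minimal construction, with reactions given as (name, substrates, products) triples, might look as follows (the encoding is an illustrative assumption):

```python
def reaction_graph(reactions):
    """Build the directed bipartite species-reaction graph of a chemical
    network: an edge species -> reaction if the species is a substrate,
    and reaction -> species if it is a product."""
    edges = []
    for name, substrates, products in reactions:
        edges += [(s, name) for s in substrates]
        edges += [(name, p) for p in products]
    return edges
```

Qualitative properties such as cycles or feedback loops can then be read off this graph without knowing any reaction constants, which is exactly the robustness to parameter uncertainty the abstract refers to.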
It is widely believed that the Internet's AS-graph degree distribution obeys a power-law form. However, it was recently argued that since Internet data is collected in a tree-like fashion, it only produces a sample of the degree distribution, and this sample may be biased. This argument was backed by simulation data and mathematical analysis, which demonstrated that under certain conditions a tree sampling procedure can produce an artificial power-law in the degree distribution. Thus, although the observed degree distribution of the AS-graph follows a power-law, this phenomenon may be an artifact of the sampling process. In this work we provide some evidence to the contrary. We show, by analysis and simulation, that when the underlying graph degree distribution obeys a power-law with an exponent $\gamma>2$, a tree-like sampling process produces a negligible bias in the sampled degree distribution. Furthermore, recent data collected from the DIMES project, which is not based on single source sampling, indicates that the Internet indeed obeys a power-law degree distribution with an exponent $\gamma>2$. Combining this empirical data with our simulation of traceroute experiments on DIMES-measured AS-graph as the underlying graph, and with our analysis, we conclude that the bias in the degree distribution calculated from BGP data is negligible.
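The single-source, tree-like sampling at issue can be mimicked by taking the BFS tree from one source and recording only tree degrees, which under-sample the true degrees. This is a simplified stand-in for a traceroute-style exploration:

```python
from collections import deque

def bfs_tree_degrees(adj, source):
    """Single-source 'traceroute-like' sampling: take the BFS tree from
    `source` over an adjacency dict {node: set(neighbours)} and return
    each reached node's degree within the tree."""
    parent = {source: None}
    tree_deg = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                tree_deg[v] = 1          # edge to parent
                tree_deg[u] += 1         # edge to child
                q.append(v)
    return tree_deg
```

Comparing the tree-degree distribution against the true degree distribution on synthetic power-law graphs is the kind of experiment used to quantify the sampling bias.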
This paper describes the effects of perturbations, which simulate the knock-out of single genes, one at a time, in random Boolean models of genetic networks (RBN). The analysis concentrates on the probability distribution of so-called avalanches (defined in the text) in gene expression. The topology of the random Boolean networks considered here is of the scale-free type, with a power-law distribution of outgoing connectivities. The results for these scale-free random Boolean networks (SFRBN) are compared with those of classical RBNs, which had been previously analyzed, and with experimental data on S. cerevisiae. It is shown that, while both models approximate the main features of the distribution of experimental data, SFRBNs tend to overestimate the number of large avalanches.
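The knock-out experiment described above can be sketched directly: run the wild-type and the knocked-out network from the same initial state and count the genes whose expression ends up differing. The encoding of update rules as lookup tables is an illustrative assumption:

```python
def rbn_step(state, inputs, tables, knockout=None):
    """One synchronous update of a random Boolean network.  `inputs[i]`
    lists the regulators of gene i and `tables[i]` maps their joint
    state (as a tuple) to the next value; a knocked-out gene is
    clamped to 0."""
    nxt = []
    for i in range(len(state)):
        if i == knockout:
            nxt.append(0)
        else:
            key = tuple(state[j] for j in inputs[i])
            nxt.append(tables[i][key])
    return nxt

def avalanche_size(state, inputs, tables, knockout, steps=20):
    """Number of genes whose expression differs from the wild type
    after the knockout, i.e. the size of the avalanche."""
    wt, ko = list(state), list(state)
    ko[knockout] = 0
    for _ in range(steps):
        wt = rbn_step(wt, inputs, tables)
        ko = rbn_step(ko, inputs, tables, knockout)
    return sum(a != b for a, b in zip(wt, ko))
```

Repeating this over many random networks and knock-outs yields the avalanche-size distribution compared against the S. cerevisiae data.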
Over the past several years, a number of measures have been introduced to characterize the topology of complex networks. We perform a statistical analysis of real data sets, representing the topology of different real-world networks. First, we show that some measures are either fully related to other topological measures or that they are significantly limited in the range of their possible values. Second, we observe that subsets of measures are highly correlated, indicating redundancy among them. Our study thus suggests that the set of commonly used measures is too extensive to concisely characterize the topology of complex networks. It also provides an important basis for classification and unification of a definite set of measures that would serve in future topological studies of complex networks.
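The redundancy check described above amounts to computing pairwise correlations between measures evaluated across a collection of networks and flagging near-perfectly correlated pairs. A minimal sketch, with illustrative names and a hypothetical threshold:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def redundant_pairs(measures, names, threshold=0.95):
    """Flag pairs of topological measures whose values across a set of
    networks are almost perfectly correlated (|r| >= threshold), i.e.
    candidates for pruning from a measurement suite.  `measures[i]` is
    the list of values of measure i, one per network."""
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(pearson(measures[i], measures[j])) >= threshold:
                pairs.append((names[i], names[j]))
    return pairs
```

In practice one would use rank correlations as well, since relations between topological measures are often monotone but nonlinear.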
Metabolic networks are very well characterized for bacteria such as E. coli. For this reason they provide a very interesting framework for the construction of analytically tractable statistical mechanics models. In this paper we introduce a solvable model for the distribution of fluxes in the metabolic network. We show that the effect of the topology on the distribution of fluxes is to allow for large fluctuations of their values, a fact that should have implications for the robustness of the system.
We consider the $k$-core decomposition of network models and Internet graphs at the autonomous system (AS) level. The $k$-core analysis allows one to characterize networks beyond the degree distribution and to uncover structural properties and hierarchies due to the specific architecture of the system. We compare the $k$-core structure obtained for AS graphs with those of several network models and discuss the differences and similarities with the real Internet architecture. The presence of biases and the incompleteness of the real maps are discussed, and their effect on the $k$-core analysis is assessed with numerical experiments simulating biased exploration on a wide range of network models. We find that the $k$-core analysis provides an interesting characterization of the fluctuations and incompleteness of maps, as well as information helping to discriminate the original underlying structure.
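The $k$-core decomposition used in both this paper and the generator paper above is computed by the standard peeling algorithm, which can be sketched compactly:

```python
def k_core_decomposition(adj):
    """Peeling algorithm for the k-core decomposition: for k = 1, 2, ...
    repeatedly remove every node whose remaining degree is below k; the
    coreness of a node is the largest k for which it survives.
    `adj` is an adjacency dict {node: set(neighbours)}."""
    deg = {u: len(vs) for u, vs in adj.items()}
    alive = set(adj)
    core = {}
    k = 0
    while alive:
        k += 1
        while True:
            peel = [u for u in alive if deg[u] < k]
            if not peel:
                break
            for u in peel:
                core[u] = k - 1          # removed while testing the k-core
                alive.discard(u)
                for v in adj[u]:
                    if v in alive:
                        deg[v] -= 1
    return core
```

With a bucket structure over degrees this runs in linear time, which is what makes $k$-core analysis practical on AS-scale graphs.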
We introduce a tentative classification scheme for empirical networks based on global qualitative properties detected through the spectrum of the Laplacian of the graph underlying the network. Our method identifies several distinct types of networks across different domains of applications, indicates hidden regularity properties and provides evidence for processes like node duplication behind the evolution or construction of a given class of networks.
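The spectral fingerprint underlying such a classification is the eigenvalue set of the graph Laplacian, which is straightforward to compute for small networks:

```python
import numpy as np

def laplacian_spectrum(adj_matrix):
    """Eigenvalues of the combinatorial graph Laplacian L = D - A,
    sorted ascending.  The distribution of these eigenvalues is the
    global 'shape' on which a spectral classification of networks
    can be based."""
    A = np.asarray(adj_matrix, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))
```

The multiplicity of the eigenvalue 0 equals the number of connected components, and peaks in the spectrum at integer values are one signature of node-duplication processes.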