
Network emergence


  1. Network Emergence & Genetic Evolution
  2. EMERGENCE IN THE SCIENCES Emergence is the act or an instance of emerging. In philosophy, systems theory, science, and art, emergence occurs when an entity is observed to have properties its parts do not have on their own. These properties or behaviors emerge only when the parts interact in a wider whole. For example, smooth forward motion emerges when a bicycle and its rider interoperate, but neither part can produce the behavior on its own. Emergence plays a central role in theories of integrative levels and of complex systems. Network science provides an abstract representation that is tailored to each discipline, such as engineering, physics, and biology.
  3. Emergence in Social Science In the social sciences, N is a set of actors, and L defines some relationship among actors. For example, N may be the set of people who work together in an office, and L may be the lines on an organization chart. Alternatively, N may be a set of people who have contracted a communicable disease from one another, and L may be the interactions that led to contraction of the disease. Emergence is more than a network’s transformation from an initial state to a final state. In the physical and biological sciences, “emergence is the concept of some new phenomenon arising in a system that wasn’t in the system’s specification to start with” (Standish, 2001). This definition refers to the repeated application of microrules that results in unexpected macrostructure. For example, a network’s degree sequence distribution is one way to characterize its macrostructure, while a link-rewiring microrule might characterize its microstructure. There is no apparent connection between the degree sequence distribution and a certain rule for linking node pairs. Still, some “new phenomenon” might unexpectedly arise from repeated application of simple rewiring. For example, the “new phenomenon arising” might be a scale-free degree sequence distribution arising from the evolution, even though “scale-free structure” was not in the system’s initial specification. This “new phenomenon” was unexpected because preferential attachment works at the local level, while the degree sequence distribution is a global property.
  4. Emergence in Physical Science In the physical sciences, emergence is used to explain phenomena such as phase transitions in materials (gases cooling and changing to liquids, etc.) or Ising effects (magnetic polarization). In thermodynamics, for example, emergence links large-scale properties of matter to its microscale states. Specifically, emergence links the temperature (large-scale property) of a block of ice to the states of its molecules (microscale property); as water cools below its freezing point, individual molecules change phase according to the microrules of physics, and the body of water changes from liquid to solid (macroscale property). In network theory, this is equivalent to linking the classification of a network to its entropy; a random network has greater entropy than does an equivalent regular network. At what point does a random network become regular? Emergence appears to stem from microbehavior at the atomic level (e.g., at the level of nodes and links). It produces macroscale patterns from microscale rules. Often there is little or no obvious connection between the micro- and macrolevels. This has led to the concept of hidden order—unrecognized structure within a system (Holland, 1998). What appears to be chaos is actually nonlinear behavior. Hidden order may be a matter of scale—what is impossible to recognize up close becomes obvious when one steps back and views it at a distance. For example, a close-up view of a painting may seem indistinguishable from random paint smears, but when viewed from a distance, it is easily recognized as the famous Mona Lisa. Is emergence simply a change in scaling factor?
  5. Emergence in Biology Emergence in networks and natural species of plants and animals is rather obvious. In fact, some contemporary biologists and natural historians claim that life itself is the product of emergence—once called spontaneous generation. Life arose spontaneously over a long period of time, by the repeated application of very small steps called mutations. Beginning with inanimate chemicals and purely chemical processes, simple living organisms emerged through a lengthy process of trial and error. Biological emergence requires that we believe in increasing complexity at the expense of diminishing entropy. On the application of each microstep (chemical reaction), randomness is replaced by structure. Structure evolves through further application of microrules (absorption of energy) to replace simple structure with more complex structure. At some point, the inanimate structure becomes animate—complexity reaches the level of a living organism. This process continues, diversifies, and reaches higher levels of complex structure. Ultimately, the Darwinian rules of evolution dominate, leading to the emergence of intelligence.
  6. Simple-to-complex structure emergence has been demonstrated under controlled conditions, but no one has demonstrated the emergence of life from nonlife. Organic substances have been spontaneously generated from inorganic chemicals, but this is a far cry from the spontaneous generation of a living organism from organic chemicals. As scientists, we must remain skeptical of this theory.
  7. GENETIC EVOLUTION Open-loop emergence originates from within the network itself. The network absorbs energy and forms new nodes and links or rearranges existing nodes and links. Emergence is dynamic—microrules applied once per time step eventually lead to significant transformation of the network. Over long expanses of time, the network reaches a final state, if the emergence is convergent. If it is divergent, the network never reaches a final state and cycles through either a finite or an infinite number of states. For example, suppose that a network with n nodes and m < n links adds one link at each timestep, until the network becomes complete. This convergent process ends when the network reaches its final state with m = n(n − 1)/2 links. On the other hand, a network that adds a new node and a new link at each timestep never reaches a final state. Instead, it diverges, adding nodes and links without end. Genetic emergence is simple—repeatedly apply microrules at each timestep and observe the results. Does the network converge? In most cases, we conjecture that a certain pattern will emerge after a sufficiently long time. Hence, we can test “cause and effect” hypotheses.
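As a concrete illustration of the convergent case, the following minimal Java sketch (not taken from Network.jar; the class name and adjacency-matrix representation are assumptions for illustration) adds one randomly chosen missing link per timestep to a fixed set of n nodes, so it must stop once m = n(n − 1)/2.

    import java.util.Random;

    // Minimal sketch of a convergent open-loop process: add one missing link
    // per timestep to a fixed node set until the network is complete.
    public class ConvergentEmergence {
        public static void main(String[] args) {
            int n = 6;                            // fixed number of nodes
            boolean[][] adj = new boolean[n][n];  // adjacency matrix of G(t)
            int m = 0;                            // current number of links
            int complete = n * (n - 1) / 2;       // links in the complete network
            Random rng = new Random();
            int t = 0;
            while (m < complete) {                // convergent: a final state exists
                int i = rng.nextInt(n), j = rng.nextInt(n);
                if (i != j && !adj[i][j]) {       // add the link only if it is missing
                    adj[i][j] = adj[j][i] = true;
                    m++;
                }
                t++;
            }
            System.out.println("Complete network reached after " + t + " timesteps");
            // A divergent variant would instead add a new node plus a new link
            // every timestep, so n and m grow without bound and no final state exists.
        }
    }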
  8. For example, if we repeatedly replace lower-degreed nodes with higher-degreed nodes, we conjecture that a random network evolves into a scale-free network. But conjectures may not be true. In fact, the first illustration of open-loop emergence, below, shows this conjecture to be wrong! The point is that we can test hypotheses and conjectures in a search for cause–effect explanations of how natural and fabricated systems work.
  9. Hub Emergence Consider the following open-loop emergent process. Initially, G(0) is a random network with n nodes and m links. G(0) may be created by the ER generative procedure or the anchored random network procedure described earlier. At each time step, select a node and a link at random and ask the question, “Can we rewire the randomly selected link such that it connects to a higher-degreed node?” In this case, the randomly selected node becomes the link’s new head if its degree is higher than that of the link’s current head node. The link is rewired to point to the higher-degreed node or left as is. This simple micro rule repeats forever. We conjecture that a scale-free network will emerge from the random network because over a long period of time a hub with very high degree emerges. After a sufficient length of time, does the degree sequence of G(0) transition from a Poisson distribution to a power law? We test this hypothesis in the following analysis. The “hub emergence” micro rule is very simple—rewire a randomly selected link whenever it increases the degree of a high-degreed node. Network.jar repeats a Java method implementing the hub emergence micro rule for as long as the user desires.
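The Network.jar method itself is not reproduced in the slide text. The following is a minimal stand-in, assuming a simplified array representation (links[k] = {tail, head} plus a degree[] array) rather than the Network.jar node and link classes; it applies the micro rule as described: rewire a random link’s head to a random node only when that node already has higher degree.

    import java.util.Arrays;
    import java.util.Random;

    // Simplified stand-in for the hub-emergence micro rule (not the Network.jar source).
    public class HubEmergence {
        static int[][] links;   // links[k] = {tail, head}
        static int[] degree;    // degree[v] = current degree of node v
        static Random rng = new Random();

        // One timestep: rewire a random link's head to a random node, but only
        // if that node already has higher degree than the current head node.
        static void hubMicroRule() {
            int k = rng.nextInt(links.length);    // randomly selected link
            int v = rng.nextInt(degree.length);   // randomly selected node
            int tail = links[k][0], head = links[k][1];
            if (v != tail && v != head && degree[v] > degree[head]) {
                degree[head]--;                   // old head loses the link
                degree[v]++;                      // higher-degreed node gains it
                links[k][1] = v;                  // rewire; otherwise leave the link as is
            }
        }

        public static void main(String[] args) {
            int n = 50, m = 100;                  // G(0): a small random network
            degree = new int[n];
            links = new int[m][2];
            for (int k = 0; k < m; k++) {         // random node pairs (duplicates possible in this sketch)
                int a = rng.nextInt(n), b;
                do { b = rng.nextInt(n); } while (b == a);
                links[k] = new int[]{a, b};
                degree[a]++;
                degree[b]++;
            }
            for (int t = 0; t < 160_000; t++) hubMicroRule();   // repeat the micro rule
            System.out.println("Largest hub degree: " + Arrays.stream(degree).max().getAsInt());
        }
    }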
  10. Is this process another application of the law of increasing returns? After thousands of timesteps, does a scale-free network emerge? We can check our hypothesis by simply inspecting the degree sequence that emerges after 160,000 timesteps! Figure 7.2a shows the degree sequence distribution of G(0)—the random network—and Fig. 7.2b shows the distribution after 160,000 iterations. If the convergent network G(160,000) had evolved into a scale-free network, its degree sequence distribution would be a power law. Clearly, this is not the case. Figure 7.2b shows a skewed Poisson distribution instead. Its smallest-degreed nodes have one link, its largest-degreed node has 178 links, and the peak of the distribution is at 4 links! Contrast this with the random network: a maximum hub with 18 links, a minimum hub with zero links, and a peak at 10 links.
  11. Cluster Emergence Hub emergence leads to non-scale-free networks with hub structure. Is it possible to construct a non-small-world network with a high cluster coefficient? The answer is “Yes,” as we show next. Beginning once again with a random network, suppose that we use feedback-loop emergence as shown in Fig. 7.1b to enhance the cluster coefficient of an emergent network. After each time step we guarantee that the overall cluster coefficient of the network is no less than it was in the prior time step. Over time this network will increase its clustering—at least in theory. Cluster coefficient emergence works as follows. Select a random link and a random node. Rewire the link to point to the new (random) node if the overall cluster coefficient remains the same or is increased. If the cluster coefficient decreases as a result of rewiring, revert to the topology of the previous time step. Repeat this micro rule indefinitely, or until stopped.
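A sketch of this feedback-loop micro rule is shown below, assuming an adjacency-matrix representation (not the Network.jar implementation). Each timestep tentatively rewires one end of a random link to a random node, recomputes the overall cluster coefficient, and reverts the change if clustering decreased. Driving it in a loop from a random G(0) follows the same pattern as the hub-emergence sketch above.

    import java.util.Random;

    // Sketch of cluster-coefficient emergence with feedback (assumed data structures).
    public class ClusterEmergence {
        static boolean[][] adj;         // adjacency matrix of the evolving network
        static Random rng = new Random();

        // Average local cluster coefficient over all nodes.
        static double clusterCoefficient() {
            int n = adj.length;
            double sum = 0;
            for (int v = 0; v < n; v++) {
                int deg = 0, triangles = 0;
                for (int i = 0; i < n; i++) if (adj[v][i]) deg++;
                if (deg < 2) continue;  // nodes of degree 0 or 1 contribute zero
                for (int i = 0; i < n; i++)
                    for (int j = i + 1; j < n; j++)
                        if (adj[v][i] && adj[v][j] && adj[i][j]) triangles++;
                sum += 2.0 * triangles / (deg * (deg - 1));
            }
            return sum / n;
        }

        // One timestep: tentatively rewire link (a,b) to (a,c); revert if clustering fell.
        static void clusterMicroRule() {
            int n = adj.length;
            int a = rng.nextInt(n), b = rng.nextInt(n), c = rng.nextInt(n);
            if (a == b || a == c || b == c || !adj[a][b] || adj[a][c]) return;
            double before = clusterCoefficient();
            adj[a][b] = adj[b][a] = false;       // remove the old link
            adj[a][c] = adj[c][a] = true;        // rewire to the random node c
            if (clusterCoefficient() < before) { // feedback: keep only non-decreasing steps
                adj[a][c] = adj[c][a] = false;   // revert to the previous topology
                adj[a][b] = adj[b][a] = true;
            }
        }
    }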
  12. DESIGNER NETWORKS “most of the time.” Given a degree sequence g, we can construct a network containing nodes with sequence g, if such a topology is realizable. The desired network may not be realizable without allowing duplicate links between node pairs, however. In some cases, we may have to sacrifice precision for practicality (no duplicates) in order to come close. In this section, we show that by restricting the total number of node degrees to the constraint Σg = 2m, where m is the number of links in G, and using the proper preferential attachment algorithm, we can produce any realizable network from an arbitrary starting point. Rewiring links according to degree sequence g, following the constraints described below, degree sequence emergence produces a customized or “designer network” with exactly the topology we want. We use method NW_doSetStates(total_value, mean_value) to store the desired degree sequence g in the nodes of the initial network G(0).
  13. When stored in each node as its state or value, the elements of g are called residual degrees or residual values. The objective is to transform G(0) to G(t) such that the degree sequence of G(t) is exactly g. The initial elements of g will be decremented during the evolution of the desired network, so that the emerged network matches g. If all values are zero after evolution, the network has converged to the desired degree sequence, g. The first parameter of NW_doSetStates(total_value, mean_value) is typically set to 2m because each link connects two stubs, and the second parameter is typically set to the desired network’s average degree value, λ = 2m/n. If total_value is less than 2m, the method reports an error message and returns. If mean_value is too large, the network will not converge because it cannot insert enough links to realize degree sequence g. The state s of each node is limited to a minimum value of one and a maximum value of twice the average degree: 1 ≤ s ≤ 2λ. The minimum degree value ensures that the evolved network is connected. The maximum degree of any node is n − 1 (n − 2 plus the minimum of 1 assigned to all nodes), because duplicate links are not allowed, and each node is able to connect to at most (n − 1) others.
  14. If the total number of degrees is odd, then at least one degree will remain unused because links consume degrees in pairs. This method assumes the total number of degrees assigned to all nodes to be an even number. Therefore, parameter total_value must be an even number. The following parameters guarantee satisfactory results: Method NW_doSetStates() loads the initial network G(0) with g, in preparation for emergence. In addition to constraints on total_value and mean_value, the method must guarantee an even number of stubs, stubs less than the maximum possible (n − 1), and handle extreme values of n and m.
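A sketch of the kind of checks just described is given below. The method name mirrors the text, but this is not the Network.jar source; in particular, the way the residual values are spread across the nodes is an assumption made for illustration.

    import java.util.Random;

    // Sketch of loading a degree sequence into node values under the constraints
    // described above (even stub count, minimum value 1, bounded maximum value).
    public class SetStatesSketch {
        int n, m;        // nodes and links of G(0)
        int[] value;     // residual degree stored in each node

        boolean doSetStates(int totalValue, int meanValue) {
            if (totalValue < 2 * m) {                      // too few stubs to cover m links
                System.err.println("total_value must be at least 2m");
                return false;
            }
            if (totalValue % 2 != 0) totalValue--;         // links consume degrees in pairs
            int maxValue = Math.min(2 * meanValue, n - 1); // no duplicate links: degree <= n - 1
            if (totalValue > n * maxValue) {
                System.err.println("total_value too large to realize without duplicates");
                return false;
            }
            value = new int[n];
            for (int v = 0; v < n; v++) value[v] = 1;      // minimum of 1 keeps the network connected
            int remaining = totalValue - n;
            Random rng = new Random();
            while (remaining > 0) {                        // spread the remaining stubs at random
                int v = rng.nextInt(n);
                if (value[v] < maxValue) { value[v]++; remaining--; }
            }
            return true;
        }
    }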
  15. Degree Sequence Emergence Degree sequence emergence attempts to transform an arbitrary network into a network with a prescribed degree sequence. Given an arbitrary starting network G(0) and degree sequence g = {d1, d2, ..., dn}, where di = degree of node i, evolve the network into G(final), a network with degree sequence g. The target degree sequence g is stored in the network as value(vj) = dj, for each node vj in G(0). We claim that the degree sequence of G(t) converges to g when Σg = 2m, and duplicate links are allowed. Degree sequence emergence does not converge to g when Σg < 2m because there are too many links. Emergence either stops or finds many approximations to g when Σg > 2m. When G(final) is realizable, degree sequence emergence produces a “designer” network as specified by input values g. The objective of degree sequence emergence is to decrease the difference between the initial degree sequence of G(0) and the specified degree sequence g stored in each node as the node’s state or value. This may be possible by rewiring links that connect to nodes with too many links, thereby decreasing each node’s degree until it matches—or approximates—the value specified in g. We do this by disconnecting one or both ends of a link and reconnecting the broken link to a randomly selected node. Note that we specifically do not attempt to increase the degree of nodes by purposely attaching links to nodes with a deficiency of links.
  16. For example, suppose that G(0) = {N(0), L(0), f}, n = 4, and g = {2, 2, 1, 3}. Note that Σg = 2 + 2 + 1 + 3 = 8 = (2)(4) = 2m. Figure 7.7a shows G(0) before emergence, and Fig. 7.7b shows G(100) after 100 timesteps have elapsed. Initially, the degree sequence is {2, 3, 2, 1}, but after approximately 100 timesteps, the network topology has changed to match the objective: g = {2, 2, 1, 3}. Emergence has closed the gap between the initial and desired degrees of G. Degree Sequence Emergence (per Node Value): 1. Store g = {d1, d2, ..., dn} in the value fields of nodes {n1, n2, ..., nn} of G(0); for example, vi = value(ni) = di. 2. Repeat indefinitely: a. Select a random link L and a random node r from G. L.head points to the head node of link L, and L.tail connects to the tail node of L.
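The slide above lists only the first steps of the per-node-value micro rule. The sketch below is one plausible completion, based on the stated objective of moving links away from nodes whose degree exceeds the value stored in them and reconnecting the loose end to the randomly selected node r; the array representation and the rule body are assumptions, not the book’s listing or the Network.jar method.

    import java.util.Random;

    // Hedged sketch of a degree-sequence-emergence micro rule (one interpretation,
    // not the book's full listing).
    public class DegreeSequenceEmergence {
        static int[][] links;    // links[k] = {tail, head}
        static int[] degree;     // current degree of each node
        static int[] target;     // desired degree di stored as each node's value
        static Random rng = new Random();

        // One timestep of step 2: pick a random link L and random node r, and
        // rewire L's head away from a node that has more links than its target.
        static void degreeSequenceMicroRule() {
            int k = rng.nextInt(links.length);      // step 2a: random link L ...
            int r = rng.nextInt(degree.length);     // ... and random node r
            int tail = links[k][0], head = links[k][1];
            if (degree[head] > target[head] && r != tail && r != head) {
                degree[head]--;                     // head node had too many links
                degree[r]++;                        // reconnect the loose end to r
                links[k][1] = r;
            }
        }
    }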
  17. Generating Networks with Given Degree Sequence The foregoing raises a question regarding the creation of networks with a specific topology. We know how to generate a random network, a scale-free network, and a small-world network, but is it possible to generate any desired network with a given degree sequence? Molloy and Reed showed how to do this using an algorithm like the one described above (Molloy, 1995). The Molloy–Reed algorithm embodies the right idea, but has one major weakness—it does not guarantee the desired topology every time. The MR algorithm creates di stubs (half-links that lack the other endpoint) at each node vi with assigned value di. Then it performs a kind of preferential attachment selection process to connect the loose end of each stub to another loose end at another node. Because links must connect pairs of nodes, we assume Σg to be an even number. The process of connecting stubs repeats until all stubs are connected or we run out of available nodes.
  18. MR is implemented in Network.jar as method NW_doMolloyReed(). First, the method creates n nodes and stores the elements of g in each node’s value field. The total of all node values, Σg, must be an even number. Then it attempts to connect Σg stubs together by random selection of nodes with yet-to-be-linked stubs. Initially, the value stored at each node is equal to the initial residual degree specified by g. Each time the algorithm converts a pair of stubs into a link, the residual degree is decremented. If all residual degree values reach zero, the desired network emerges. However, there is no guarantee that all node values will be decremented to zero. In fact, it is likely that one or more stubs will fail to match with another stub. This requires that the preferential attachment loop give up after numerous unsuccessful attempts. This limit on the method prevents infinite looping when the emergent behavior does not converge.
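The sketch below illustrates the stub-matching idea, including the give-up guard against infinite looping. The residual-degree array, the duplicate-link check, and the failure limit are assumptions for illustration; this is not the NW_doMolloyReed() source.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.Random;

    // Sketch of Molloy-Reed style stub matching with a give-up limit.
    public class MolloyReedSketch {
        public static void main(String[] args) {
            int[] residual = {2, 2, 1, 3};           // residual degrees g; their sum must be even
            int n = residual.length;
            List<int[]> links = new ArrayList<>();
            Random rng = new Random();
            int failures = 0, maxFailures = 10_000;  // prevents infinite looping

            while (failures < maxFailures) {
                int a = rng.nextInt(n), b = rng.nextInt(n);
                // Pair two stubs only if both nodes still have residual degree,
                // the nodes differ, and the link would not duplicate an existing one.
                if (a != b && residual[a] > 0 && residual[b] > 0 && !hasLink(links, a, b)) {
                    links.add(new int[]{a, b});
                    residual[a]--;                   // each new link consumes two stubs
                    residual[b]--;
                    failures = 0;
                } else {
                    failures++;                      // unsuccessful attempt
                }
                if (Arrays.stream(residual).allMatch(d -> d == 0)) {
                    System.out.println("Degree sequence realized with " + links.size() + " links");
                    return;
                }
            }
            System.out.println("Gave up with " + links.size() + " links; g not fully realized");
        }

        static boolean hasLink(List<int[]> links, int a, int b) {
            for (int[] L : links)
                if ((L[0] == a && L[1] == b) || (L[0] == b && L[1] == a)) return true;
            return false;
        }
    }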
