The study of how networks form, grow, and change over time is a relatively new area of research, but it is critical to understanding how to foster the development of some types of networks and inhibit the development of others. For example, researchers have studied innovation as a process of diffusion across a network.1 Traditionally, research in graph theory has focused its attention on static graphs. However, almost all real networks are dynamic in nature, and how they have evolved and changed over time is a defining feature of their topology and properties. Because network theory is still a young subject, much of it remains focused on the basics of static graphs; studying their dynamics adds a whole new set of parameters to our models and takes us to a new level of complexity, much of which remains unexplored and is the subject of active research.
The development of a network may involve adding more nodes, but also, more interestingly, adding links that increase the overall connectivity. In random network models, links are simply placed between nodes at random with some given probability. “Growing” the network here just means increasing this probability so that more links develop over time. One interesting thing we find when we do this is that there are thresholds and phase transitions during the network’s development.2 By a threshold we simply mean that as we gradually increase the link probability parameter, some property of the network suddenly appears when we pass a critical value.
The first threshold occurs when the average degree rises above one over the total number of nodes in the network; at this point, the first links begin to appear. At an average degree of one, that is, when every node has on average one connection, the network starts to appear connected. We see one giant component emerging within the network, i.e. one dominant cluster, and we start to get cycles, which means there are feedback loops in the network. Another threshold occurs when nodes have an average degree of log(n); at this point everything starts to be connected, meaning there is typically a path from each node to all other nodes in the network.3 This is what we see in random networks, but most real-world networks are not random, as they are subject to resource constraints and exhibit preferential attachment, giving them clusters that we do not see in random graphs.
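These thresholds can be demonstrated with a small simulation. The sketch below is illustrative rather than rigorous: it builds Erdős–Rényi random graphs (the network size and seed are arbitrary choices) and compares the largest component size just below and just above the critical average degree of one, using a simple union-find to track components.

```python
import random

def largest_component(n, p, seed=0):
    # Build an Erdos-Renyi random graph G(n, p) and return the size
    # of its largest connected component, tracked with union-find.
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj

    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

n = 500
# Below the critical value (average degree 0.5) all components stay
# small; above it (average degree 2) one giant component dominates.
print(largest_component(n, 0.5 / n))
print(largest_component(n, 2.0 / n))
```

Running this with an average degree of 0.5 leaves only small, fragmented clusters, while an average degree of 2 produces a single component containing most of the network.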
One way of thinking about how real-world networks form is through the lens of percolation theory.4 Percolation theory looks at how something filters or percolates through something else, like a liquid filtering through the mesh structure of a material. Or we might think about water running down the side of a hill. As it does, the water finds the path of least resistance, creating channels and furrows on the side of the hill. This network formation is then the product of the resource constraints that its environment placed upon it; because the constraints are unevenly distributed, the network’s topology reflects this as it follows the paths of least resistance, avoiding the toughest material. To demonstrate the general relevance of this, we can take some other examples. If, say, cheap flights are put on from one city to another, then people will start using this transportation link because of financial constraints. Or, because of the phenomenon of homophily within social networks, we get the same percolation dynamics: it is easier for people to make links with people who are similar to themselves than with others, again creating a particular structure based on the social constraints within the system.
To incorporate this into our model, we would need to add attributes or properties to the nodes in the network, with links forming depending upon these properties. In a social network, these attributes might be age, gender, income, etc., and people will attach preferentially to similar others. We would then assign a probability for the likelihood that any node will connect with another of the same kind, compared with the likelihood that it will connect with a node of a different kind. Including these factors in our representation yields a much more realistic model in which we see local clustering alongside some distant relations. Behind the relatively abstract model of network development presented here lies a much more complex set of questions about the local incentives of the nodes in the network. Another way of looking at network formation within this model is from the perspective of the nodes and the local rules they are operating under. To do this, we might use game theory, which looks at the incentives of the individuals within the network and their payoff for forming or breaking a connection.
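A minimal sketch of such a homophily model follows; the two node types and the connection probabilities are purely hypothetical values chosen for illustration. Each node is assigned one of two types, and same-type pairs connect with a higher probability than mixed pairs, which produces the local clustering described above.

```python
import random

def homophily_network(n, p_same, p_diff, seed=1):
    # Assign each node one of two types, then link pairs at random,
    # with same-type pairs more likely to connect than mixed pairs.
    # The types and probabilities are purely illustrative.
    rng = random.Random(seed)
    types = [rng.choice("AB") for _ in range(n)]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = p_same if types[i] == types[j] else p_diff
            if rng.random() < p:
                edges.append((i, j))
    same = sum(1 for i, j in edges if types[i] == types[j])
    frac_same = (same / len(edges)) if edges else 0.0
    return len(edges), frac_same

total, frac_same = homophily_network(200, p_same=0.10, p_diff=0.02)
print(total, frac_same)  # most links end up between same-type nodes
```

Even though same-type and mixed pairs are roughly equally numerous, the biased probabilities mean the large majority of realized links are within-type, mirroring the clustering that homophily produces in real social networks.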
Another key driver behind the formation of real-world networks is the so-called network effect and Metcalfe’s law, which states that the value of a network grows as the square of the number of nodes in the network.5 The network effect arises when users derive value from the presence of other users, with the telephone being a classic example. The more people that join the network, the more valuable it becomes. Thus, there is a positive feedback loop where more people joining feeds back to make the network more valuable, which in turn draws more people, and so on. In this way we can get the exponential growth that we have seen in the rise of many network organizations like Twitter and Facebook. We might note that the development of these networks is nonlinear, with distinct tipping points, because it requires a critical mass of users for something like a computer operating system to have any value. But once you have gone beyond that critical mass, it becomes very valuable because you are now able to interoperate with all these other users. This is why free is a good marketing strategy for some I.T. startups: it is all about reaching this critical mass. Once you have it, the network effect kicks in and the product becomes a “must have.”
Most real-world networks are not random; during their formation, they were subjected to certain environmental and resource constraints that shaped them as they developed in a particular non-random fashion. Added to this, most economic networks are user-generated: they have been formed out of local nodes choosing to make connections. Thus, both the local rules under which agents make these connections and the environmental constraints they are under are defining factors in the network’s formation. For example, if we take a trade network, we need to know what physical and socio-political constraints are inhibiting the formation of the network, and inversely, under what set of rules agents are choosing to make connections.
The growth of a network may be nonlinear, meaning there will likely be sub-linear growth up to a certain tipping point, after which positive feedback kicks in to give us super-linear, exponential growth. In this way, something like the Internet can lie relatively dormant for a long time and then take off rapidly. An important thing to recognize in the growth of a network is that whereas the number of nodes in the network may grow in a linear fashion, as in one, two, three, etc., the number of edges can grow in a super-linear fashion. With 1 node, we have 0 links. With 2 nodes, we can have 1 link. With 3, we can have 3 links. With 4 nodes, we can have 6 links. With 5, we can have 10. With 6 nodes, we can have 15 possible links.
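The enumeration above follows the standard formula n(n − 1)/2 for the maximum number of links among n nodes, which a few lines of code can confirm:

```python
def max_links(n):
    # Maximum possible links among n nodes: n(n - 1) / 2.
    return n * (n - 1) // 2

for n in range(1, 7):
    print(n, max_links(n))
# Nodes grow linearly while possible links grow quadratically:
# 1 -> 0, 2 -> 1, 3 -> 3, 4 -> 6, 5 -> 10, 6 -> 15
```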
Whereas the number of edges starts off lower than the number of nodes, it will likely sooner or later catch up and then outgrow it rapidly. In this example, the number of edges caught up with the number of nodes very quickly because we were talking about the maximum possible number of links; in reality, not every node will be fully connected, and thus it may often take a lot longer to catch up. But once it does, we start to move from a component-based regime to a relational regime, and the connections add significant value to the system. We get a positive feedback loop, and the system may then grow exponentially. This is called the network effect. For example, the network effect can be seen in stock markets and derivatives exchanges. Market liquidity is a major determinant of transaction cost in the sale or purchase of a security. As the number of buyers and sellers on an exchange increases, liquidity increases, and transaction costs decrease. This then attracts a larger number of buyers and sellers to the exchange. Thus, we get the positive feedback loop that is behind the network effect. In order to understand this process of network development better, we will go over each stage individually.
In the initial phase of a network’s formation, due to the limited number of nodes and connections in the network, the value of joining that network may, in fact, be negative because of the opportunity cost. Joining this network may well exclude you from joining another more mature network that already has a lot of network value. For example, if you choose to adopt a Linux operating system, you will be limiting your capacity to interoperate with over one billion users of Windows. Thus, in terms of opportunity cost, you are actually having to pay to be part of this burgeoning Linux network, and the same would be true for a social network, digital currencies and many other types of networks that have not reached a critical mass. These early adopters are typically special interest users that particularly care about this service and are prepared to pay the opportunity cost. It is these early enthusiasts that really matter because with them your network may be able to reach the critical mass; without them, you will not. And reaching this critical mass beyond which the network effect will take hold is the key factor in the early formation of the network.
A key parameter here is how much of the value of using the system lies in the components versus the connections. If there is value inherent to the product without connecting it, as would be the case for a washing machine, then early adoption is not very difficult. But other things are very much dependent upon their connections, such as the telephone, where it is very difficult to get the original users because there is no value in the system without others to connect to. The role of expectations is very important here: if people do not expect the network to grow, they will not join, and it will not reach critical mass. If their expectations are positive, then it may well reach this threshold.
If enough nodes join the network, then we may reach critical mass and get a tipping point. The tipping point is the critical point in the system’s development, as it defines where positive feedback gains traction, leading to rapid and irreversible state change. The term critical mass is said to have originated in the field of epidemiology, describing the point at which the spread of an infectious disease goes beyond any local ability to control it from spreading more widely.6 It is in many ways analogous to a phase transition. Marketers use the term to denote a threshold that, once reached, will result in additional sales. At the point of critical mass, the value obtained from the good or service is greater than or equal to the price paid for it. Beyond this, it becomes much more attractive for people to join, as the value keeps going up: each new user joining creates a higher surplus value for the next prospective user. With this positive feedback loop, we can get the bandwagon effect, where agents couple to the network without any intrinsic evaluation of, or knowledge of, the actual phenomenon, but simply join to gain the benefit of the network effect, in the way that someone might adopt a certain ideology out of fashion, without knowledge of it, simply to be socially accepted.
The bandwagon effect can lead to overcapacity, as the increase in the number of users generally cannot continue indefinitely. After a certain point, many networks become either congested or saturated, halting further uptake. Congestion occurs due to overuse.7 As an example, we might think about the telephone network. While the number of users is below the congestion point, each additional user adds value for every other customer. However, at some point, the addition of an extra user exceeds the capacity of the existing system. After this point, each additional user decreases the value obtained by every other user.
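This rise and fall in per-user value can be captured by a simple value function; the capacity and scale used here are hypothetical numbers chosen only to make the shape visible.

```python
def value_per_user(n, capacity=1000, k=0.01):
    # Illustrative congestion model: below capacity, each extra user
    # adds value for everyone; beyond it, congestion erodes that
    # value again. Capacity and scale k are hypothetical.
    if n <= capacity:
        return k * n
    return max(0.0, k * capacity - k * (n - capacity))

# Per-user value peaks at the congestion point and falls beyond it.
print(value_per_user(500), value_per_user(1000), value_per_user(1500))
```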
If this is the case, then the next critical point is where the value obtained falls back to the point where it approximates the price paid. The network will cease to grow at this point, and the system must be enlarged to enable future growth. This is the case for centralized systems, but it may not be the case for distributed networks. New peer-to-peer network models such as Bitcoin may defy congestion altogether, because true peer-to-peer networks are designed to distribute the network’s load among their users. This theoretically allows peer-to-peer networks to scale almost indefinitely, at least until market saturation.
But there is also a flip side to the network effect and network development: crowding out and lock-in. Due to the importance of interoperability within network economies, there is a strong attractor toward everything converging onto the same network and the same set of standards or protocols, resulting in lock-in. Network effects are notorious for causing lock-in, with the most-cited examples being Microsoft products and the QWERTY keyboard. The network effect that created liquid markets in the previous example is also apparent in the difficulty that startup exchanges have in dislodging a dominant exchange. For example, the Chicago Board of Trade has retained overwhelming dominance of trading in US Treasury bond futures despite the startup of Eurex US trading of identical futures contracts. Mitigating these negative externalities means maintaining an open, vendor-neutral network within which new standards and protocols can be incorporated. The success of the Internet lies in many ways in its openness, net neutrality, and the fact that no one owns it.
The rules under which a network was created and developed will play a large role in how something spreads across it and ultimately how robust it is to failure. The first thing to note with respect to network diffusion and robustness is that connectivity can both add to and reduce the system’s robustness. It works both ways. Connectivity is important for integrating the system, and it is this integration that gives the system its overall robustness, but connectivity is also a potential pathway for disaster spreading. For example, in a recent paper entitled Systemic Risk and Stability in Financial Networks, the authors summarize their findings as follows: “We show that financial contagion exhibits a form of phase transition as interbank connections increase. As long as the magnitude and the number of negative shocks affecting financial institutions are sufficiently small, more ‘complete’ interbank claims enhance the stability of the system. However, beyond a certain point, such interconnections start to serve as a mechanism for propagation of shocks and lead to a more fragile financial system.”
There are a few key parameters that greatly affect this process of failure propagation within complex networks. Firstly, how contagious is the phenomenon that is spreading? An important consideration here is whether it is being amplified by some positive feedback loop. Secondly, how resistant are the nodes in the network to this phenomenon? Thirdly, we need to consider the topology of the network. Is it centralized or decentralized? Centralized networks are more susceptible to certain kinds of attack. Lastly, we need to take into account whether the failure is being spread strategically or at random, as different network topologies exhibit different vulnerability characteristics depending on how random the failure is.
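These parameters can be explored with a toy cascade model; the network size, link density, and resistance threshold below are all illustrative choices. One node is failed initially, and any node fails once the fraction of its failed neighbours exceeds its resistance.

```python
import random

def cascade(n=200, p=0.03, threshold=0.3, seed=2):
    # Toy failure-propagation model: build a random network, fail
    # node 0, then let any node fail once the fraction of its failed
    # neighbours exceeds its resistance threshold. All parameters
    # are illustrative.
    rng = random.Random(seed)
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].add(j)
                nbrs[j].add(i)
    failed = {0}
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i in failed or not nbrs[i]:
                continue
            if len(nbrs[i] & failed) / len(nbrs[i]) > threshold:
                failed.add(i)
                changed = True
    return len(failed)

# More resistant nodes (higher threshold) tend to contain the
# cascade; fragile nodes (lower threshold) let it spread further.
print(cascade(threshold=0.6), cascade(threshold=0.05))
```

Varying the threshold shows the node-resistance parameter at work; swapping the random graph for a centralized, hub-based topology or choosing which node to fail first would let the same sketch probe the topology and strategic-versus-random questions raised above.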