Systems Approach

Anyone who studies Internet technology quickly learns the importance of distributed algorithms to its design and operation. Routing protocols are an obvious example of such algorithms.
I remember learning how link-state routing works and appreciating the elegance of the approach: each router informs its neighbors of its local view of the network; these updates are flooded until every router has a complete picture of the network topology; then each router runs the same shortest-path algorithm to provide (mostly) loop-free routing. I think it was this elegance, and the mental challenge of understanding how such algorithms work, that made me a "networking person" for the next thirty years.
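The shortest-path computation described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not any router's actual implementation): the link-state database is modeled as a nested dict, and each router would run the same Dijkstra computation over its own copy of it.

```python
import heapq

def shortest_paths(lsdb, source):
    """Compute least-cost distances from `source` with Dijkstra's algorithm.

    `lsdb` models a link-state database: {router: {neighbor: link_cost}}.
    Because every router floods its local links, each node ends up with
    the same map and can run this same computation independently.
    """
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        cost, node = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for neighbor, link_cost in lsdb.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(pq, (new_cost, neighbor))
    return dist

# A toy four-router topology (names and costs are made up).
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(shortest_paths(topology, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Since every router computes over an identical database, their forwarding decisions are consistent, which is what keeps the resulting routes (mostly) loop-free.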
The idea of decentralization is firmly anchored in the architecture of the Internet. The definitive article on the original Internet design is "The Design Philosophy of the DARPA Internet Protocols" by David Clark, published [PDF] in 1988. Near the top of the list of design goals we find "Internet communication must continue despite loss of networks or gateways" and "The Internet must allow distributed management of its resources". The first goal leads directly to the idea that there should be no single points of failure, while the second speaks more to how network operations should be decentralized.
The idea of decentralization is firmly anchored in the architecture of the Internet
When I worked on the MPLS development team in the late 1990s, we absolutely believed that every algorithm had to be fully decentralized. MPLS Traffic Engineering (TE) and MPLS-BGP VPNs were designed to use fully distributed algorithms with no central point of control. In the case of TE, we realized early on that centralized algorithms could come closer to providing optimal solutions, but we saw no way to put such algorithms in the hands of users, given the fundamentally distributed nature of routing.
Ultimately, it was software-defined networking that showed centralized algorithms could do better. Google with B4, and Microsoft with SWAN [PDF], both found ways to improve on MPLS-TE using centralized path-selection algorithms, with an SDN controller pushing centrally computed paths down to routers that implement a distributed data plane. And MPLS VPNs now face a serious challenge from SD-WAN solutions, which centralize control of VPN tunnel creation to deliver an operationally much simpler solution than MPLS provides.
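One reason centralized path selection can beat a distributed IGP is that the controller sees every link's spare capacity at once. A common approach is constrained shortest-path computation: prune links that cannot carry the requested bandwidth, then run shortest-path on what remains. The sketch below is a hypothetical illustration of that idea, not the B4 or SWAN algorithm; the link-table format and names are assumptions.

```python
import heapq

# Hypothetical controller view of the network: {(u, v): (cost, spare_bandwidth)}.
links = {
    ("A", "B"): (1, 10),
    ("B", "D"): (1, 2),
    ("A", "C"): (2, 10),
    ("C", "D"): (2, 10),
}

def te_path(links, src, dst, demand):
    """Constrained shortest path: drop links with too little spare bandwidth
    for `demand`, then run Dijkstra over the residual graph and return the
    path. A central controller can do this because it sees every link's
    utilization; no single router running a distributed IGP has that view.
    """
    adj = {}
    for (u, v), (cost, bw) in links.items():
        if bw >= demand:  # prune links that cannot fit the demand
            adj.setdefault(u, []).append((v, cost))
            adj.setdefault(v, []).append((u, cost))
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale entry
        for nxt, cost in adj.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(pq, (nd, nxt))
    if dst not in dist:
        return None  # no path satisfies the bandwidth constraint
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

print(te_path(links, "A", "D", 5))  # ['A', 'C', 'D']: B-D lacks bandwidth
print(te_path(links, "A", "D", 1))  # ['A', 'B', 'D']: plain shortest path
```

The controller would then install the chosen path on the routers, which continue to forward packets with a fully distributed data plane.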
Many people who had internalized the lessons of distributed network architecture found it difficult to accept SDN because the concept of centralized control was so at odds with everything we thought about network design best practices. What drove me to the SDN camp was the realization that you can build scalable, fault-tolerant networks with centralized control as long as you leverage ideas from outside the networking community.
Consensus algorithms like Paxos and Raft, for example, are at the heart of most SDN controllers, allowing them to scale and tolerate component failures. SDN enables logical centralization of control without introducing the drawbacks of bottlenecks or single points of failure. And this has produced substantial benefits, such as the ability to expose a network-wide API, greatly simplifying the problem of network configuration and paving the way for automated network provisioning.
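The fault-tolerance argument rests on simple quorum arithmetic: in Raft and Paxos, any decision (including electing a leader) requires votes from a strict majority, and any two majorities must overlap. The toy sketch below illustrates that arithmetic only; it is not an implementation of either protocol, and the function names are my own.

```python
def majority(cluster_size):
    """Smallest quorum in a cluster of n nodes: strictly more than half."""
    return cluster_size // 2 + 1

def can_elect_leader(cluster_size, reachable):
    """A candidate wins only by gathering votes from a majority, so the
    cluster stays available as long as a majority of nodes can reach
    each other."""
    return reachable >= majority(cluster_size)

# A 5-node controller cluster needs 3 votes, so it tolerates 2 failures.
assert majority(5) == 3
assert can_elect_leader(5, reachable=3) is True
# With only 2 of 5 nodes reachable, there is no quorum and no leader --
# the cluster chooses consistency over availability.
assert can_elect_leader(5, reachable=2) is False
# An even 3/3 split of a 6-node cluster leaves neither side with a quorum.
assert can_elect_leader(6, reachable=3) is False
```

This is why SDN controller clusters are typically deployed with an odd number of replicas: a 4-node cluster tolerates no more failures than a 3-node one.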
SDN hasn’t made the Internet any less decentralized, either. There are still hundreds or thousands of ISPs, the domain name system remains decentralized, and autonomous systems are still managed independently of each other.
Platforms like Google, Facebook and Twitter … present a rather monolithic view of the Internet to billions of users
But there is one aspect of centralization to be concerned about: the platforms that shape how most people experience the Internet. While platforms like Google, Facebook, and Twitter are technically impressive distributed systems, they present a rather monolithic view of the Internet to billions of users. This view of how the services we actually consume on the Internet have become increasingly centralized is well captured in a blog post by Chris Dixon of a16z. A similar point was nicely illustrated by one of my favorite cartoonists, The Oatmeal, in “Reaching people on the Internet in 2021”.
Dixon and The Oatmeal both point out the downsides of leaving too much control in the hands of the big platforms. For example, a central platform can suddenly change its policies in ways that cut creators off from their audiences.
There are also more technical examples in which widespread reliance on a single platform has led to large-scale unavailability of Internet services. The Fastly outage of 2021 had a global impact on sites that depended on its CDN (including the New York Times and Amazon); a few days later, an outage at Akamai had a similar effect; and Cloudflare’s failures in 2020 provide another example of a platform issue with massive impact. There is an interesting Cloudflare blog post discussing another high-impact outage, which was traced back to Raft failing to elect a leader under certain parameter settings and failure conditions. In essence, a flaw in a distributed algorithm created a single point of failure for many customers.
It’s worth going back to Clark’s 1988 article on Internet design philosophy and noting that while the Internet still works when routers and gateways fail, satisfying goal number one, many services and websites now fail when a platform they depend on (such as a CDN) fails. Single points of failure have, in effect, been unintentionally reintroduced. And while the distributed management of the Internet continues, many of the services we depend on are managed by a small number of entities.
Some of these problems are easier to solve than others. The Oatmeal cartoon points to a subscription email service as a way to bypass the central gatekeepers of content. Using multiple CDN providers may become a recognized best practice. And it has been argued that blockchains could lead to a more decentralized Internet (see Dixon’s post above). Decentralized finance is one example of how blockchains have created an opportunity to decentralize historically centralized functions. Non-fungible tokens (NFTs) offer artists and creators a possible route to their audiences that avoids central entities (record labels, streaming services, auction houses). At the same time, there is plenty of justified skepticism about the long-term potential of blockchains and cryptocurrencies to move beyond the current speculative phase.
It seems that the pendulum swung heavily towards centralization with the rise of a few giant Internet companies controlling how billions of people experience the Internet, but it is showing signs of slowing down, if not starting to swing back in the other direction. Decentralization is a pillar of the Internet’s architecture that has been critical to its success, and we are now seeing a wide range of efforts to return the Internet to its decentralized roots. Hopefully at least some of them will succeed. ®