Peering and central data centers: a trend on the way to becoming a must

Central data centers are becoming genuine performance levers that shape many infrastructure choices. This dynamic is driven by the growth of peering.

Little known to the general public, central data centers are familiar mainly to network experts. They are connectivity hubs, real marketplaces: to put it simply, they bring together almost all the players in the digital value chain. The challenge is to identify them and open a point of presence there: a network PoP that will be highly interconnected and will deliver an excellent ROI thanks to the savings achieved through direct interconnections.

These three types of data centers (hyperscale, central and proximity) are usually organized in a loop where each link plays a specific role in the overall IT architecture. Hyperscalers generally use central data centers as relays to get closer to their customers, while proximity data centers rely on them to fetch live traffic. It is thus common to see an actor host its application with a hyperscaler, deploy its IT at the edge, and secure an optimized network path by creating peering links in a core data center.

Central data centers therefore play a very specific role: they are becoming real performance levers that determine many infrastructure choices.

How to identify a central data center? The simplest approach is to consult directories of network players such as PeeringDB and search for the data centers that record the largest number of members. Once that is done, the analysis can be completed easily. These data centers share differentiating traits: they have many members, their network equipment is amortized, and they offer a large number of available network ports. They also offer an extremely wide choice of actors, which lets an applicant be confident of finding an offer matching their specific need.
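As a rough sketch, this ranking step can be automated. The snippet below simply sorts facility records by member count; the sample records are invented for illustration, and the `net_count` field name follows PeeringDB's public facility API but should be verified against the live schema.

```python
# Sketch: rank data center facilities by number of network members.
# Real data would come from the PeeringDB API (https://www.peeringdb.com/api/fac);
# the sample records below are invented for illustration, and the
# "net_count" field name is an assumption based on PeeringDB's facility objects.

def top_facilities(facilities, n=3):
    """Return the n facilities with the most network members."""
    return sorted(facilities, key=lambda f: f["net_count"], reverse=True)[:n]

sample = [
    {"name": "Facility A", "city": "Paris",     "net_count": 480},
    {"name": "Facility B", "city": "Marseille", "net_count": 120},
    {"name": "Facility C", "city": "Lyon",      "net_count": 310},
]

for fac in top_facilities(sample, n=2):
    print(f'{fac["name"]} ({fac["city"]}): {fac["net_count"]} members')
```

In practice the facility with the most members in a region is a strong candidate for a "central" data center in the sense used here.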

The benefits offered by these central data centers are numerous, but peering is their main attraction. What we are all looking for is interconnection, whether as a user or as a content provider. In a central data center, everyone is equal: everyone shares data over a physical connection from point A to point B. Whatever type of interconnection I choose (peering, direct interconnection or transit), I know that everyone is a cable's length away from my rack: I benefit from this "clustering effect". And the effect is virtuous: the more actors a central data center brings together, the more interconnection there is, and the cheaper it becomes.

Peering is a strong trend that is becoming a must. In 2017, a study on traffic measurement at French ISPs published by ARCEP indicated that the data exchanged within the country breaks down as follows: 46% private peering, 50% transit and 4% public peering. For comparison, we examined one of our central data centers to see whether this pattern was reproduced, and the finding is striking: the interconnection statistics between our customers show the same ratios ARCEP measured.
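A minimal sketch of how such a mix is derived from measured volumes (the traffic figures below are invented; only the method, shares of a total, matters):

```python
# Sketch: derive the interconnection mix from measured traffic volumes.
# The volumes are invented placeholders, not actual ARCEP or customer data.

def interconnection_mix(volumes):
    """Return each interconnection type's share of total traffic, in percent."""
    total = sum(volumes.values())
    return {kind: round(100 * v / total, 1) for kind, v in volumes.items()}

# Hypothetical measured volumes (Gbps) chosen to mirror the ARCEP-style split.
volumes_gbps = {"private peering": 46.0, "transit": 50.0, "public peering": 4.0}
print(interconnection_mix(volumes_gbps))
```

Running the same computation on a data center's own interconnection statistics is what allows the comparison with the national figures.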

The most interesting part is how these interconnections evolve over time. Between 2017 and 2018, the share of transit dropped significantly, public peering gained momentum, and private peering increased dramatically. There is thus a strong trend among network managers toward recommending direct traffic and peering.

This dynamic has three main consequences. First, content players will get closer to end customers, "bypassing" hosting and transit providers in the short and medium term; this is a phenomenon we already see, for example, with AWS Direct Connect. Second, transit providers, seeing their business decline, will try to recover through CDN services the margins they are losing. Third, and less significantly, ISPs themselves will try to get closer to the end customer by including content in their offers.

We have emphasized a key point several times: the future of traffic flows is strongly tied to the choices of network managers, who increasingly recommend sending traffic directly. In this situation, several good practices deserve to be shared.

First, start with the application. Before choosing where to host my IT, I need to question its nature. What are we talking about? Is it a third-party application easily deployable in SaaS mode on the public cloud, or an industrial information system that must be hosted on my own machines?

Depending on the answer, I will organize my architecture. The challenge, as we said, is creating network access that improves the user experience and reduces costs. Depending on their priority and the level of security required, applications will be distributed between core, edge and hyperscale data centers.
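To illustrate the reasoning (not an actual tool; the criteria and the routing rules are illustrative assumptions, not a real placement policy), the distribution decision can be sketched as a simple rule:

```python
# Sketch: route an application to a data center tier based on two criteria.
# Both criteria and the mapping are illustrative assumptions for this article,
# not a real placement policy.

def place_application(latency_sensitive: bool, must_stay_on_premises: bool) -> str:
    """Pick a hosting tier for an application."""
    if must_stay_on_premises:
        return "edge"        # own machines, deployed close to the users
    if latency_sensitive:
        return "core"        # central data center, hyper-connected via peering
    return "hyperscale"      # SaaS-style workload on the public cloud

print(place_application(latency_sensitive=True, must_stay_on_premises=False))
```

A real decision would of course weigh more dimensions (cost, compliance, data gravity), but the shape of the reasoning is the same: answer the questions about the application first, then let the answers pick the tier.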

Second, how to bring the user closer to these applications? The choice is quite simple: either I use peering or direct interconnections, or I host my application locally in my data center and set up a private network link to my end user.

The direction of the trend seems to be a transformation of the IT manager into a buyer. IT managers are now able to organize their outsourcing choices across these three types of data centers. By answering specific questions (which application must be hyper-connected? which application should be hosted in my own data center? which application can be hosted in a public cloud?), they address emerging business challenges: how to reduce cost and latency for the user? how to increase security? Our technical choices become business choices.