Analysis: How are Faster Networks Advancing the New-Age Datacenters

We are witnessing a significant uplift in the data transmission speeds offered by network connectivity providers. Service providers now promise speeds from hundreds of Mbps to multiple Gbps, enough to live-stream a Blu-ray-quality movie without any buffering. Such network speeds are set to trigger many new technology possibilities. Businesses cannot afford to stay behind; they have to take into account the new technologies being widely adopted across the competitive market landscape. The focus of businesses has therefore become clear and narrow: constantly satisfy customer demands with compelling digital offerings and push the business ahead to gain competitive advantage.

To align with this transformation trend, businesses have already started to optimize and redesign their data centers to handle the vast amount of data generated by a growing number of consumer devices. Since user-generated data is processed at central data centers, it is natural for businesses to transform the data center to address this upgrade need. The transition involves the use of:

  • Virtual Network Functions (VNFs), which replace dedicated server hardware with software-based packages for specific tasks, an approach known as Network Function Virtualization (NFV).
  • Software-defined networking (SDN) to gain central control of the network using a core framework that allows admins to define network operations and security policies.
  • Seamless orchestration among several network components using ONAP, ETSI OSM, Cloudify, etc.
  • Workload (VM and container) and data center management by implementing OpenStack, Azure Stack, Amazon S3, CloudStack, Kubernetes, etc. Containers are being widely adopted due to features like faster instantiation, easier integration, scaling, security, and ease of management.
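The SDN point above, defining network operations and security policies once at a central controller, can be sketched with a toy controller in Python. All class and field names here are illustrative, not any real SDN API:

```python
# Toy SDN-style controller: admins register policies centrally,
# and every switch asks the controller for a forwarding decision.
class Controller:
    def __init__(self):
        self.policies = []  # list of (match_fn, action) pairs, in priority order

    def add_policy(self, match_fn, action):
        self.policies.append((match_fn, action))

    def decide(self, packet):
        for match_fn, action in self.policies:
            if match_fn(packet):
                return action
        return "drop"  # default-deny for unmatched traffic

controller = Controller()
# Security policy: block telnet everywhere.
controller.add_policy(lambda p: p["dst_port"] == 23, "drop")
# Allow web traffic.
controller.add_policy(lambda p: p["dst_port"] in (80, 443), "forward")

print(controller.decide({"dst_port": 443}))  # forward
print(controller.decide({"dst_port": 23}))   # drop
```

The point of the sketch is the separation of concerns: switches hold no policy of their own, so changing one rule at the controller changes behavior across the whole fabric.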

The next thing that will disrupt the data center is the adoption of edge architecture. Edge computing brings a mini data center closer to where data is generated by devices like smartphones, industrial instruments, and other IoT devices. This adds more endpoints before data reaches the central data center, but it comes with an advantage: most of the computing is done at the edge, which helps reduce the load on network transmission resources. In addition, hyperconvergence can be used at edge nodes to simplify the required mini data center.
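The claim that edge computing reduces the load on transmission resources can be illustrated with back-of-the-envelope arithmetic. The sensor counts and payload sizes below are illustrative assumptions, not figures from any deployment:

```python
# Why edge computing relieves the network: sensors stream raw
# readings, the edge node aggregates them locally, and only
# summaries cross the WAN to the central data center.
SENSORS = 1000
READINGS_PER_MIN = 60
BYTES_PER_READING = 200

# Without an edge node, every raw reading traverses the WAN.
raw_bytes_per_min = SENSORS * READINGS_PER_MIN * BYTES_PER_READING

# With an edge node, one summary record per sensor per minute
# is forwarded; the raw data stays local.
SUMMARY_BYTES = 200
edge_bytes_per_min = SENSORS * SUMMARY_BYTES

print(raw_bytes_per_min)   # 12000000 bytes/min without edge
print(edge_bytes_per_min)  # 200000 bytes/min with edge
print(raw_bytes_per_min // edge_bytes_per_min)  # 60x reduction
```

Under these assumptions the edge node cuts WAN traffic sixty-fold, which is exactly the "maximum computing at the edge" benefit described above.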


Mobile Edge Computing (MEC), a core project maintained by ETSI (since renamed Multi-access Edge Computing), has emerged as the edge computing model to be followed by telecom operators. ETSI continues to work on innovations that improve the delivery of core network functionalities using MEC, and it guides vendors and service providers.

Apart from edge computing, network slicing is a new architecture introduced in 5G that will impact how data centers are designed for particular premises and dedicated to specific use cases like industrial IoT, transportation, and sports stadiums.

Data Center Performance for High-Speed Networks

In this transforming age, large amounts of data will be transferred between devices and the data center, as well as between data centers. As new use cases demand low latency and high bandwidth, it is important to obtain higher performance from the data center. Such performance cannot be achieved with legacy techniques or by simply adding more capacity to data centers.

With the surge of the data tsunami in the last few years, data center technology vendors have come up with new inventions, and communities have formed to address the performance issues raised by different types of workloads.

One technique that has been significantly utilized in new-age data centers is to offload some CPU tasks to the network, or to the switches and routers interconnecting servers. Take the example of the network interface card (NIC): the card used to connect a server to the network components of a data center has become a SmartNIC, offloading processing tasks that the system CPU would normally handle. SmartNICs can perform network-intensive functions like encryption/decryption, firewalling, and TCP/IP and HTTP processing.
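A rough model shows why offloading packet processing to a SmartNIC frees up host CPUs. The cycle counts and packet rate below are illustrative assumptions, not vendor benchmarks:

```python
# Cycles the host CPU spends per packet when it does crypto and
# TCP/IP itself, versus when the SmartNIC handles those stages
# and the host only touches the payload.
CYCLES_PER_PACKET_SW = 6000
CYCLES_PER_PACKET_NIC = 500
PACKETS_PER_SEC = 2_000_000
CPU_HZ = 3_000_000_000  # one 3 GHz core

def cores_needed(cycles_per_packet):
    """Cores consumed just to keep up with the packet stream."""
    return cycles_per_packet * PACKETS_PER_SEC / CPU_HZ

print(cores_needed(CYCLES_PER_PACKET_SW))   # 4.0 cores without offload
print(cores_needed(CYCLES_PER_PACKET_NIC))  # ~0.33 cores with a SmartNIC
```

Under these assumptions the offload returns more than three and a half cores per server to application workloads, which is where the cost-saving argument in the survey below comes from.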

Recently, Futuriom conducted a Data Center Network Efficiency survey, asking IT professionals about their perceptions and views on data centers and networks. Apart from virtualizing network resources and workloads, the use of SmartNICs and processing-offload techniques emerged as a top interest among IT professionals for efficiently processing data on high-speed networks. This shows that businesses increasingly rely on smart techniques that can save costs while delivering notable performance improvements in the data center for faster networks.

Workload accelerators like GPUs, FPGAs, and SmartNICs are widely used in current enterprise and hyperscale data centers to improve data processing performance. These accelerators interconnect with CPUs to speed up data processing and require extremely low latency for transmitting data back and forth with the server's CPU. Recently, to address this high-speed, low-latency requirement between workload accelerators and CPUs, Intel, along with leading tech giants like Alibaba, Dell EMC, Cisco, Facebook, Google, HPE, and Huawei, formed an interconnect technology called Compute Express Link (CXL) that improves performance and removes bottlenecks in computation-intensive workloads for CPUs and purpose-built accelerators. CXL focuses on creating a high-speed, low-latency interconnect between the CPU and workload accelerators such as GPUs, FPGAs, and networking devices. CXL technology maintains memory coherency between the CPU memory space and memory on attached devices, which allows resource sharing for higher performance, reduced software-stack complexity, and lower overall system cost.

NVMe is another interface, introduced by the NVM Express community. It is a storage interface protocol used to accelerate access to SSDs in a server. NVMe minimizes the CPU cycles demanded by applications and handles enormous workloads with a smaller infrastructure footprint. NVMe has emerged as a key storage technology and has had a great impact on businesses dealing with vast amounts of fast data, particularly data generated by real-time analytics and emerging applications.
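One reason NVMe boosts SSD access is its many deep command queues, which keep the drive's internal parallelism busy. A toy throughput model (using Little's law, with illustrative latency and parallelism figures) captures the contrast with a legacy single-queue interface:

```python
# Toy model: throughput ~ outstanding commands / latency,
# capped by how many operations the SSD can service at once.
LATENCY_US = 100            # assumed 100 microseconds per command
DEVICE_PARALLELISM = 128    # assumed concurrent operations the SSD sustains

def iops(outstanding_commands):
    effective = min(outstanding_commands, DEVICE_PARALLELISM)
    return effective * 1_000_000 // LATENCY_US

# Legacy AHCI/SATA-style interface: one queue, depth 32.
print(iops(32))        # 320000 IOPS
# NVMe-style interface: e.g. 8 queues of depth 1024.
print(iops(8 * 1024))  # 1280000 IOPS, now limited by the device itself
```

With only 32 outstanding commands, the drive sits mostly idle; NVMe's deep queues let the device, rather than the host interface, become the bottleneck.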

Automation and AI

Agile 5G networks will drive the growth of edge compute nodes in the network architecture to process data closer to endpoints. These edge nodes, or mini data centers, will sync up with a central data center as well as interconnect with each other. For operators, manually setting up numerous edge nodes would be a daunting task. Moreover, those edge nodes will regularly need initial deployment, configuration, software maintenance, and upgrades. In the case of network slicing, there could be a need to install or update VNFs for particular tasks for the devices in a slice. Performing all of this manually is not feasible. This is where automation comes into the picture: operators get a central dashboard at the data center to design and deploy configurations for the edge nodes.

Technology businesses are demonstrating or implementing AI and machine learning at the application level to enable automatic responsiveness, for example, the use of chatbots on a website. Much of this AI is applied to data lakes to generate insights from self-learning AI-based systems. Similar autonomous capabilities will be required by the data center. AI systems will be used to monitor server operations: tracking activity for self-scaling on sudden demand for compute or storage capacity, self-healing and alerting on breakdowns due to external attacks or catastrophic situations, end-to-end testing of operations, and so on. Tech businesses have already started offering solutions for these use cases, for example, a joint AI-based integrated infrastructure offering by Dell EMC Isilon and NVIDIA DGX-1 for self-scaling at the data center.
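The self-scaling behavior mentioned above can be reduced to a small control loop. The version below uses fixed utilization thresholds as a minimal sketch; an AI-driven system would replace the thresholds with learned demand forecasts, and all parameter values here are illustrative:

```python
def autoscale(replicas, cpu_percent, low=30, high=75,
              min_replicas=1, max_replicas=20):
    """Return the new replica count for one monitoring interval."""
    if cpu_percent > high:
        return min(replicas * 2, max_replicas)   # sudden demand: scale out
    if cpu_percent < low:
        return max(replicas // 2, min_replicas)  # idle: scale in to save cost
    return replicas                              # steady state: no change

print(autoscale(4, 90))   # 8  -> doubled under load
print(autoscale(4, 10))   # 2  -> shrunk while idle
print(autoscale(4, 50))   # 4  -> unchanged
print(autoscale(16, 95))  # 20 -> capped at max_replicas
```

The min/max caps matter in practice: they keep a misbehaving metric from scaling the fleet to zero or without bound.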


New architectures and technologies are being introduced along with the revolution in networks. Most of the infrastructure has turned software-centric in response to the growing number of devices and higher bandwidth. Providing latency as low as 10 microseconds is a new challenge for operators in enabling new technologies in the market. For this to happen, the data center needs to complement the higher-bandwidth network; it forms the base for the digital innovation to come.

Sagar Nangare

Digital Strategist at Calsoft Inc.
Sagar Nangare is a technology blogger focusing on data center technologies (networking, telecom, cloud, storage) and emerging domains like edge computing, IoT, machine learning, and AI. He currently serves Calsoft Inc. as Digital Strategist.
