Evolution of Data Centers

To understand what a datacenter is, let us first recall what a computer is. A computer consists of three major components: CPU (computation), memory (RAM), and disk (storage). Similarly, a datacenter traditionally had three discrete, interconnected components: servers (for compute), storage arrays (for storage), and network switches (for connectivity). These components evolved over time and, in doing so, drove the evolution of the datacenter itself.

Datacenter Components

Server

The most important part of a server, as of any computer, is its processor. CPUs evolved chiefly in the number of cores, the independent processing units packed into one physical processor package, each of which the operating system can expose as one or more logical processors. From single core to dual core to today's multi-core chips: that is the journey of processors.
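As a quick illustration, here is a minimal Python sketch (assuming the third-party psutil package is installed) showing the difference between the physical cores on a chip and the logical processors the operating system sees:

```python
import os

import psutil  # third-party; assumed installed via `pip install psutil`

# os.cpu_count() reports logical processors (cores x hardware threads).
logical = os.cpu_count()

# psutil.cpu_count(logical=False) reports physical cores only.
physical = psutil.cpu_count(logical=False)

print(f"Physical cores: {physical}, logical processors: {logical}")
# On a 4-core CPU with 2-way hyper-threading this would print:
# Physical cores: 4, logical processors: 8
```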

Storage

Storage evolved from the 1.44 MB floppy disk (the earliest medium I remember) and the 700 MB CD to 2 TB USB drives and 30 TB SSDs. The interfaces used to connect this storage progressed from IDE (PATA) and SCSI to SATA and SAS, and on to today's NVMe.

Networking

Networking evolved from 10 Mbps links to today's 100 Gbps. The devices changed from hubs to switches, routers, firewalls, and so on. Network reach grew from the wired LAN to Wi-Fi and the WAN.

Virtualization

All these discrete components were playing their vital roles when the era of virtualization arrived. In virtualization, physical components are divided into logical ones, which are then presented to applications as if they were multiple physical components.

All the building blocks of a computer (hardware, kernel, user space) were virtualized in turn: hardware virtualization, memory virtualization, software virtualization, network virtualization, application virtualization, storage virtualization, and so on. With virtualization, things that had earlier seemed impossible became possible.

A single computer can now run multiple operating systems, each booting at the same time as a virtual machine. All virtual machines run as applications on a single kernel called a hypervisor. A set of virtual machines forms a cluster to serve a single service or application, and a single storage disk shared across multiple servers forms a storage cluster.
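To make the idea concrete, here is a toy Python sketch, purely illustrative and not any real hypervisor's API, of how one physical pool of CPU and memory is carved into several virtual machines:

```python
from dataclasses import dataclass, field


@dataclass
class VirtualMachine:
    name: str
    vcpus: int
    memory_gb: int


@dataclass
class Hypervisor:
    """One physical host whose CPU and memory are sliced into VMs."""
    total_cpus: int
    total_memory_gb: int
    vms: list = field(default_factory=list)

    def create_vm(self, name: str, vcpus: int, memory_gb: int) -> VirtualMachine:
        used_cpus = sum(vm.vcpus for vm in self.vms)
        used_mem = sum(vm.memory_gb for vm in self.vms)
        # This toy refuses to overcommit; real hypervisors often allow it.
        if used_cpus + vcpus > self.total_cpus or used_mem + memory_gb > self.total_memory_gb:
            raise RuntimeError("not enough physical resources left on this host")
        vm = VirtualMachine(name, vcpus, memory_gb)
        self.vms.append(vm)
        return vm


# One physical server hosting three VMs that run side by side.
host = Hypervisor(total_cpus=16, total_memory_gb=64)
for i in range(3):
    host.create_vm(f"vm-{i}", vcpus=4, memory_gb=16)
print([vm.name for vm in host.vms])  # ['vm-0', 'vm-1', 'vm-2']
```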

Converged Infrastructure

The idea of virtualizing these components, integrating them, and managing them together gave rise to the term Converged Infrastructure, or CI. In CI, the various components are grouped together to form a single CI node, and datacenter administrators get a single management utility/interface to manage all of its components. Managing discrete components through one interface allowed CI to offer features such as scale-out, scale-up, and high availability.
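A hypothetical Python sketch of that idea (all names invented for illustration): the compute, storage, and network components stay discrete, but the administrator talks to a single management facade:

```python
class ComputeArray:
    def add_server(self) -> None:
        print("compute: server added")


class StorageArray:
    def add_disk(self) -> None:
        print("storage: disk added")


class NetworkFabric:
    def add_switch(self) -> None:
        print("network: switch added")


class CINode:
    """Single management interface over discrete, interconnected components."""

    def __init__(self) -> None:
        self.compute = ComputeArray()
        self.storage = StorageArray()
        self.network = NetworkFabric()

    def scale_out(self) -> None:
        # Growing capacity still means touching each discrete component.
        self.compute.add_server()
        self.network.add_switch()

    def scale_up(self) -> None:
        # Adding capacity within an existing component.
        self.storage.add_disk()


ci = CINode()
ci.scale_out()
ci.scale_up()
```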

Hyper-Converged Infrastructure

Then came the era of software-defined components: software-defined networking, software-defined storage, and software-defined compute, which together formed software-defined infrastructure. The term "software-defined" means that the services expected from physical devices are programmed and performed by pieces of code running on a node. This reduced the need for discrete hardware components and gave rise to the term Hyper-Converged Infrastructure, or HCI.

In HCI, the software implementations of the discrete components are clubbed together to form a single HCI node. Because the components are software-defined and integrated in one node, managing them through an external interface is easy; what is harder is scaling out and scaling up, since capacity grows node by node rather than device by device, as the sketch below illustrates.
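Continuing the same hypothetical sketch, an HCI node folds the software-defined services into a single node, so the cluster grows one whole node at a time, which is exactly why fine-grained scaling is the hard part:

```python
class HCINode:
    """Compute, storage, and network implemented as software on one node."""

    def __init__(self, node_id: int) -> None:
        self.node_id = node_id
        self.services = [
            "software-defined compute",
            "software-defined storage",
            "software-defined network",
        ]


class HCICluster:
    def __init__(self) -> None:
        self.nodes: list[HCINode] = []

    def scale_out(self) -> None:
        # Scaling adds an entire node, all services included, even if
        # only one resource (say, storage) is actually running short.
        self.nodes.append(HCINode(len(self.nodes)))


cluster = HCICluster()
for _ in range(3):
    cluster.scale_out()
print(f"{len(cluster.nodes)} HCI nodes in the cluster")
```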

In a nutshell, clubbing the individual hardware components of a datacenter into a single node is called converged infrastructure; clubbing software-defined components into a node is called hyper-converged infrastructure.

The next era will be that of hybrid infrastructure: a mash-up of hardware-based and software-based components.

 

 