Proper attention to the networking components of a hyper-converged architecture can not only ease deployment but also improve performance and flexibility.
Of the three elements within a hyper-converged architecture — compute, storage and networking — the network infrastructure components are the most overlooked. Often, little implementation planning time is devoted to the various network infrastructure components and how nodes will interconnect. In most organizations, the tendency is to use the least-expensive commodity networking cards and switches. But IT planners need to pay special attention to networking, as it is potentially the most critical factor in hyper-converged architecture design.
Why is converged networking so important?
Hyper-converged architectures consist of a series of individual servers, called nodes, that are clustered together. Each node has its own compute resources, which are not shared between nodes. Most hyper-converged architectures also leverage flash for the storage tier; although this flash is shared across nodes, it typically provides more than enough performance. Networking is what stitches the nodes together so they can operate as a cluster. Because storage is shared across the entire cluster, a problem with a network card, a switch or the connecting cables will degrade the performance of the whole cluster.
Understanding hyper-converged I/O
Most hyper-converged architectures pin a virtual machine (VM) to a particular host but aggregate storage across the nodes, as described above. In these systems, each new write is segmented, typically into as many pieces as there are nodes, as it is sent to the flash tier, and parity data is created for protection. Each segment then travels through the switch and across the network to a node in the cluster. That is just one I/O operation; imagine the network traffic when 30,000 to 50,000 I/Os are processed per second. On top of the storage I/O, there is the normal inter-server and user-to-VM traffic.
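The segment-and-parity pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation: the node count, chunk layout and function names are assumptions, and real systems use more sophisticated erasure coding than simple XOR parity.

```python
# Illustrative sketch: segment a write across cluster nodes with XOR
# parity, the simplest form of the data protection described above.

def stripe_write(data: bytes, num_nodes: int) -> list[bytes]:
    """Split data into (num_nodes - 1) chunks plus one XOR parity chunk."""
    data_chunks = num_nodes - 1
    chunk_size = -(-len(data) // data_chunks)  # ceiling division
    # Pad so the payload divides evenly across the data chunks.
    padded = data.ljust(chunk_size * data_chunks, b"\x00")
    chunks = [padded[i * chunk_size:(i + 1) * chunk_size]
              for i in range(data_chunks)]
    # Parity segment: byte-wise XOR of all data chunks.
    parity = chunks[0]
    for chunk in chunks[1:]:
        parity = bytes(a ^ b for a, b in zip(parity, chunk))
    return chunks + [parity]  # one segment per node


def rebuild_segment(segments: list[bytes], lost_index: int) -> bytes:
    """Recover a lost segment by XORing the surviving segments."""
    survivors = [s for i, s in enumerate(segments) if i != lost_index]
    rebuilt = survivors[0]
    for s in survivors[1:]:
        rebuilt = bytes(a ^ b for a, b in zip(rebuilt, s))
    return rebuilt
```

With four nodes, a single write becomes three data segments plus a parity segment, so every write operation crosses the network to every node in the cluster; this is why per-write network traffic multiplies with cluster size.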
IT planners need to ensure that network infrastructure components can deliver the performance and bandwidth required by the combined I/O load of the hyper-converged infrastructure. In addition, they need to verify the network provides capabilities to control how that bandwidth is allocated. The network is one of the few areas where specific performance guarantees can be made to an application running in a hyper-converged architecture.
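To make the sizing exercise concrete, the combined write load can be turned into a rough bandwidth figure. The IOPS rate, block size and node count below are assumed values chosen only for the sake of arithmetic, not measurements from any real cluster.

```python
# Rough, illustrative estimate of the east-west network bandwidth a
# striped write workload generates in a hyper-converged cluster.

def estimated_write_bandwidth_gbps(iops: int, block_bytes: int,
                                   num_nodes: int) -> float:
    """Each write block crosses the network in full, plus parity
    overhead of one parity segment per (num_nodes - 1) data segments."""
    parity_overhead = num_nodes / (num_nodes - 1)
    bytes_per_sec = iops * block_bytes * parity_overhead
    return bytes_per_sec * 8 / 1e9  # bytes/s -> Gbps

# Example: 40,000 write IOPS of 8 KiB blocks on a 4-node cluster
# yields roughly 3.5 Gbps of storage traffic alone, before any
# inter-server or user-to-VM I/O is counted.
```

Even this simplified estimate shows why commodity 1 GbE networking can become the bottleneck long before the flash tier does.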
Why cabling matters in hyper-converged architectures
Data centers move to hyper-converged architectures to increase IT flexibility so they can better respond to the needs of the business. As the business scales and new applications and workloads are added, IT simply needs to add more nodes. The hyper-converged software does a good job of integrating a new node, but how well does the physical layer accommodate it? Is the cabling infrastructure provisioned in advance so that nodes can truly be just plugged in, or does each new node require a work order?
The various network infrastructure components need special attention in a hyper-converged architecture. They need to be reliable, use quality structured cabling and deliver high performance. The network is the literal foundation of the hyper-converged architecture; if it is not enterprise-class, the hyper-converged architecture will never meet expectations.
Calsoft Storage Expertise
Leveraging years of experience with storage platforms, ecosystems, operating systems and file systems, Calsoft is a pioneer in providing storage product R&D services to ISVs. Our service offerings enable storage ISVs and vendors to quickly develop next-generation storage solutions that perform well and meet enterprise IT needs.