Getting a Perspective on SAN Volume Controller HyperSwap

What’s in a name? Well, when it comes to HyperSwap, the name is a complete giveaway. The whole game in the storage industry revolves around high availability, and HyperSwap is an extension of that. As the name suggests, HyperSwap is a sort of “quick swap”, a switch to another site in case of disaster. But before we look into it, let us cover some basics of SVC and HA.

1. SVC: The SAN Volume Controller virtualizes storage by allowing diverse kinds of storage boxes to be connected behind it and presented to hosts as virtualized volumes.

2. I/O Groups: An SVC cluster can have up to 8 nodes organized into 4 I/O groups (2 nodes per I/O group). The idea behind I/O groups is effective failover through a partner node; we will not go into the details of how that failover is managed. What is important to know is that a disk is visible through both nodes of an I/O group, and in general only one I/O group is mapped for access to a specific volume. A minimal sketch of this layout follows.
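To make that layout concrete, here is a small conceptual sketch in plain Python. It is not any real SVC API; the class and attribute names are illustrative assumptions. It only captures the two facts above: nodes pair up into I/O groups, and a volume is served through both nodes of exactly one I/O group.

```python
from dataclasses import dataclass

@dataclass
class IOGroup:
    """An SVC I/O group: two nodes that back each other up."""
    name: str
    nodes: tuple  # exactly two node names, e.g. ("node1", "node2")

@dataclass
class Volume:
    """A virtualized volume, accessed through a single caching I/O group."""
    name: str
    io_group: IOGroup

# A fully populated cluster: 8 nodes forming 4 I/O groups.
cluster = [
    IOGroup("io_grp0", ("node1", "node2")),
    IOGroup("io_grp1", ("node3", "node4")),
    IOGroup("io_grp2", ("node5", "node6")),
    IOGroup("io_grp3", ("node7", "node8")),
]

# The volume is visible through BOTH nodes of its I/O group,
# so either node can continue serving I/O if its partner fails.
vol = Volume("app_vol01", cluster[0])
print(f"{vol.name} is served by nodes {vol.io_group.nodes}")
```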

High availability solutions with SVC:

a. Metro Mirror: Used to synchronously copy the I/O coming to a volume onto a second volume. The write is not acknowledged as successful to the host until the copy to the second volume succeeds. This happens at the storage level alone. To the host, the second volume is presented as a read-only volume. If a disaster happens, access is provided through the second volume; for that to happen, the read-only property of the volume has to be changed. A conceptual sketch of this write flow appears after this list.

b. Host side multipathing: To the host, each node of an I/O group presents itself as a path to the storage device. So if there are 2 HBA ports on the host, the host will see 4 paths to the storage device (2 HBA ports × 2 nodes). Understanding this is important because these paths to the storage, that is, the connections to a specific node, play an important role in HyperSwap from the storage perspective. The path arithmetic is also sketched after this list.

c. Host side clustering: Two or more hosts form a failover relationship in which each disk is mapped to all the hosts. If one host fails, a redundant host takes over the application and continues the I/O.
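To make the Metro Mirror write flow from point (a) concrete, here is a minimal conceptual sketch in Python. The names are illustrative assumptions, not SVC code: the point is simply that the host’s write is acknowledged only after the copy to the second volume succeeds, and that the secondary stays read-only until a failover flips that property.

```python
class MirroredVolume:
    """Conceptual model of a Metro Mirror relationship (synchronous copy)."""

    def __init__(self):
        self.primary = []              # data blocks on the production volume
        self.secondary = []            # data blocks on the remote copy
        self.secondary_read_only = True

    def write(self, block):
        """Host write: acknowledged only after BOTH copies are updated."""
        self.primary.append(block)
        self.secondary.append(block)   # synchronous copy to the second volume
        return "ack"                   # the host sees success only now

    def fail_over(self):
        """On disaster, make the secondary writable so hosts can use it."""
        self.secondary_read_only = False


vol = MirroredVolume()
vol.write("block-0")   # returns only after both copies hold the data
vol.fail_over()        # disaster: access is now provided via the secondary
```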
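Similarly, the path arithmetic from point (b) can be sketched as follows (again purely illustrative): every combination of host HBA port and I/O group node is a distinct path, so 2 ports × 2 nodes gives 4 paths.

```python
from itertools import product

hba_ports = ["hba0", "hba1"]            # 2 HBA ports on the host
io_group_nodes = ["node1", "node2"]     # both nodes of the volume's I/O group

# Every (port, node) combination is a separate path to the storage device.
paths = [f"{port} -> {node}" for port, node in product(hba_ports, io_group_nodes)]
print(len(paths), "paths:", paths)      # 4 paths: 2 ports x 2 nodes
```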

Disaster Recovery site (DR site): If for some reason everything fails at a site, an alternate site is maintained with more or less the same configuration. Using Metro Mirror or Global Mirror with change volumes, data is continuously replicated from the main site to the DR site. In the worst-case scenario, where the whole site fails, the customer has the option to switch over to the DR site.

So how does the HA picture look as of now?

1. There is redundancy on the host side, with multiple hosts providing failover.

2. There is redundancy on the SAN side, with multiple HBAs connecting to the SAN.

3. There is redundancy on the storage side, with multiple I/O groups, and multiple nodes within the same I/O group, providing failover.

4. There is even broader redundancy at the site level, where a separate DR site can be accessed in the event of total failure.

But all of these cases would still mean some outage if a disaster happens on the storage side. That’s where another solution, the Enhanced Stretched Cluster, comes into the picture. It is a precursor to HyperSwap.

Enhanced Stretched Cluster (ESC): In a stretched cluster, each SVC I/O group is split between the two sites, with one node of the pair at each site. In this case hosts see preferred paths from the production site and non-preferred paths from the alternate site.

The challenge comes when the production site is gone and you are left with the DR site. In that case access is available through only one node, which means there is no node redundancy at the DR site.

So the current status is that ESC provides only limited high availability. What is needed is a solution that provides complete redundancy at the DR site as well. That’s where HyperSwap comes into the picture.

HyperSwap:

HyperSwap uses the existing stretched-cluster and Metro Mirror infrastructure to great effect and provides I/O group and node redundancy at the DR site as well. HyperSwap stretches the SVC cluster in a real sense and places a full I/O group at each site instead of splitting the nodes of one I/O group. So instead of the two nodes of a single I/O group being stretched across the two sites, four nodes, that is two I/O groups, are used, with one complete I/O group at each site.

From the host perspective, two I/O groups are mapped to a single volume, with the paths to the main site as preferred paths and the paths to the DR site as non-preferred. Movement of the volume to the DR site is done using something called NDVM (Non-Disruptive Volume Movement).

But that’s not how it looks under the hood. Under the hood it is another play altogether: in the backend there are two volumes, one at each site. As I/O arrives on the preferred-path nodes, a synchronous copy is made to the volume at the DR site. The volume at the DR site is not mapped to the host. A conceptual sketch of this is given below.
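Here is a minimal conceptual sketch of that behaviour in Python. The names are illustrative assumptions, not the actual SVC implementation: the host sees a single volume, but every write landing at the production site is synchronously copied to a second, unmapped volume at the DR site.

```python
class HyperSwapVolume:
    """Conceptual model of a HyperSwap volume spanning two sites."""

    def __init__(self):
        self.site1_vdisk = []          # backing volume at the production site
        self.site2_vdisk = []          # backing volume at the DR site (not mapped to the host)
        self.mapped_to_host = "site1"  # host I/O arrives on the preferred paths

    def host_write(self, block):
        """The host writes to what it sees as one single volume."""
        self.site1_vdisk.append(block)
        self.site2_vdisk.append(block)  # synchronous copy kept at the DR site
        return "ack"


vol = HyperSwapVolume()
vol.host_write("block-0")
# Both copies are identical, but only the site-1 copy is presented to the host.
assert vol.site1_vdisk == vol.site2_vdisk
```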

So what happens if the primary I/O group becomes unavailable?

In a disaster, HyperSwap fails over to the other site within about 30 seconds, which is well within the application-layer timeout for most critical applications.

If for some reason the I/O group where the primary vdisk exists goes down, the paths from the second I/O group take over and SVC internally switches to the secondary vdisk to serve the data. For the host nothing changes: there is some added latency, but no I/O is lost. The path switch is sketched below.
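A sketch of that path switch, again purely illustrative: the multipath driver simply stops using the preferred (site-1) paths and continues I/O over the non-preferred (site-2) paths, behind which SVC serves the data from the secondary vdisk.

```python
def pick_paths(preferred_paths, non_preferred_paths, site1_io_group_online):
    """Choose which paths carry host I/O.

    Normally the preferred (production-site) paths are used; if the site-1
    I/O group is gone, I/O continues on the non-preferred (DR-site) paths.
    HyperSwap completes this switch in roughly 30 seconds, within the I/O
    timeout of most critical applications, so the host only sees some extra
    latency and loses no I/O.
    """
    if site1_io_group_online:
        return preferred_paths
    return non_preferred_paths


preferred = ["hba0->site1_nodeA", "hba1->site1_nodeB"]
non_preferred = ["hba0->site2_nodeC", "hba1->site2_nodeD"]

print(pick_paths(preferred, non_preferred, site1_io_group_online=True))   # normal operation
print(pick_paths(preferred, non_preferred, site1_io_group_online=False))  # site-1 failure
```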

To know more email: marketing@calsoftinc.com

Contributed by: Himanshu Sonkar | Calsoft Inc.