Use of Docker (Linux Containers – LXC) in Scalability & Performance Testing for NAS Products

Testing Network Attached Storage (NAS) arrays is a challenging subject, and many OEMs face a daunting task when it comes to non-functional testing such as performance testing. Another challenge is testing how scalable the product is in terms of maximum simultaneous connections. Both kinds of testing need thorough knowledge of not just the filesystem protocol, but also the backend filesystem running on the NAS filer and the client-side tooling used to measure the performance exhibited by the NAS subsystem.

Scalability testing requires a gamut of filesystem clients running different kernel versions and supporting different protocol versions, as well as multiple clients on the same kernel/protocol combination. This raises the IT budget for procuring bare-metal servers or hypervisor-hosted VMs, increasing the input cost for the product development team to build, test, and release the product under development.

With the reinvention of Linux Containers and commercially viable derivatives such as Docker, the same scalability and performance testing can be achieved with a much smaller footprint and at a lower cost to build the test lab environment.

Docker today offers all the standard features of a private cloud, and you can build entire application stacks with it. There is a complete Docker ecosystem to support this: Docker Hub for standard public images and a private Docker Registry for storing your own images; Docker Volume Driver plug-ins to work with different types of storage subsystems, such as the Netshare plugin for AWS EFS, NFS, and SMB file servers; Dockerfile and Docker Compose for scripting the instructions to build and run your custom container images; and, above all, Docker Cloud, which offers orchestration for connecting to your favourite cloud service provider, deploying instances (VMs), and ultimately running Docker Engine and containers on those instances to create your own Dev/QA/Test/Ops environment.

For scalability and performance testing of a standard NAS filer and its filesystem protocol offerings, you can make use of the Netshare plugin for NFS and SMB. Once installed and running in the background, it lets you map and mount a standard NFS export or an SMB share from the remote NAS filer into a local Docker container running in its own shell.
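A minimal command-line sketch of that flow might look like the following; the filer hostname, export path, and credentials are placeholders, and the exact flags should be checked against the Netshare plugin's own documentation:

```shell
# Start the Netshare plugin daemon in NFS mode
# (github.com/ContainX/docker-volume-netshare); run it in the
# background or under a service manager in practice.
sudo docker-volume-netshare nfs &

# Launch a client container with the remote NFS export mounted at /mnt.
# "nas-filer.example.com/export/vol1" is an illustrative filer/export.
docker run -it --volume-driver=nfs \
    -v nas-filer.example.com/export/vol1:/mnt \
    ubuntu /bin/bash

# The same plugin also speaks CIFS/SMB, e.g.:
# sudo docker-volume-netshare cifs --username tester --password secret
```

Each container sees the share as an ordinary local mount, so any client-side tool can exercise it without knowing about the plugin.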

Using the Docker CLI, Docker Compose, and a Dockerfile, you can build a customized Ubuntu/CentOS image containing all the dependencies needed to run an SMB/NFS client, then run many such containers in parallel to connect to the NAS subsystem's protocol server and test the scalability of the backend NAS.
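A sketch of such a client image is below; the base image, package names, and versions are illustrative assumptions, not taken from the article:

```dockerfile
# Hypothetical NAS-client image for scale testing.
FROM ubuntu:16.04

# NFS/SMB client utilities plus common synthetic-load tools.
RUN apt-get update && apt-get install -y \
        nfs-common cifs-utils fio dbench \
    && rm -rf /var/lib/apt/lists/*

# Keep the container alive so workloads can be exec'd into it later.
CMD ["sleep", "infinity"]
```

Built once with `docker build -t nas-client .`, the image can then be fanned out with Compose, e.g. `docker-compose scale client=50` (or `docker-compose up --scale client=50` on newer releases) against a Compose service that uses this image, giving fifty simultaneous protocol clients from one definition.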

You can keep all the standard synthetic data-ingestion tools and utilities on the Docker host filesystem and bind-mount the host directory into each container, thus exposing the standalone tools (FIO, IOzone, DBench, etc.) and workload scripts inside the container(s). You can then exercise parallel execution of synthetic workloads against the backend NAS filesystem, measure the performance stats on the NAS console as well as those observed on the individual containers, and compute the combined client-side stats to plot performance graphs. Typical load parameters to vary include block size, file size, IO pattern (sequential vs. random), queue depth (outstanding IO), read/write mix, and direct vs. buffered IO; the performance variables to measure are latency and throughput/IOPS, which together reveal the performance variation of your system under test.
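As an illustration of the client-side aggregation step, the sketch below combines per-container benchmark summaries into cluster-wide figures. The field names and numbers are hypothetical, standing in for values parsed from each container's tool output (e.g. FIO's JSON report):

```python
# Combine per-container benchmark results into cluster-wide stats.
# Each dict mimics a parsed per-container summary; the field names
# and values are illustrative, not actual FIO output.

def aggregate(results):
    """Sum IOPS/throughput across containers; average latency
    weighted by each container's IOPS contribution."""
    total_iops = sum(r["iops"] for r in results)
    total_mbps = sum(r["throughput_mbps"] for r in results)
    # Weighted mean latency: busier clients contribute more samples.
    mean_lat = sum(r["latency_ms"] * r["iops"] for r in results) / total_iops
    return {"iops": total_iops,
            "throughput_mbps": total_mbps,
            "latency_ms": round(mean_lat, 3)}

# One entry per container, e.g. collected via `docker exec`.
per_container = [
    {"iops": 1200, "throughput_mbps": 150.0, "latency_ms": 0.8},
    {"iops": 800,  "throughput_mbps": 100.0, "latency_ms": 1.4},
]

print(aggregate(per_container))
```

Summing throughput while IOPS-weighting the latency keeps the combined latency honest: a lightly loaded container with good latency does not mask a busy one with poor latency.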

To know more email: marketing@calsoftinc.com

Contributed by: Taizun Kanchwala | Calsoft Inc

Container Ecosystem Services

Calsoft has deep expertise in the containerization of storage and networking products. With our in-depth understanding of containerization technologies such as Docker, Kubernetes, Apache Mesos, and CoreOS, we have helped ISVs design and develop solutions in and around these technologies.
