File System Testing – A Sneak Peek

The file system is one of the most essential components in any storage appliance, especially in NAS appliances. It plays a significant role in data services such as compression, deduplication, thin provisioning, data integrity, security, and data protection. Because of this heavy dependency on the file system, its performance and behavior have been a major focus area for most storage ISVs. This also prompts us to understand some basic rules for designing and planning file system test executions, which not only improves product quality but also turns out to be a good ROI.

What is a file system and why is it important to the storage stack?
A file system is the software component that manages the on-disk layout of storage and facilitates I/O between user applications and the underlying storage subsystems and disks. It mediates between file-level I/O in the top half of the stack and block-level I/O in the bottom half. The file system also plays a vital role in storage performance in particular, since most of the important tasks such as block allocation and de-allocation, metadata updates, integrity checks, deduplication, and compression are done at the file system level, and most of them are latency bound. If the file system is poorly designed or configured, chances are I/Os will get throttled at the file system layer, putting overall functionality and performance at stake. There are many different file systems available, and they are an integral part of the storage software offered by different vendors. Some familiar on-disk file systems are Oracle ZFS (formerly Sun's), Linux ext3/ext4, Microsoft Windows NTFS/FAT32, Veritas VxFS, IBM GPFS, Red Hat GlusterFS, Hadoop HDFS, etc.
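To make the top-half/bottom-half distinction concrete, here is a minimal Python sketch (paths and sizes are illustrative assumptions, not from any particular product) that issues a file-level write through the POSIX API and then checks how many disk blocks the file system actually allocated for it:

```python
import os
import tempfile

# Illustrative sketch: a file-level write travels through the file system,
# which allocates blocks on the underlying storage.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"x" * 8192)   # file-level I/O via the POSIX API
    os.fsync(fd)                # ask the file system to flush to stable storage
    st = os.stat(path)
    # st_blocks reports the 512-byte units the file system allocated
    print(st.st_size, st.st_blocks)
finally:
    os.close(fd)
    os.remove(path)
```

The gap between `st_size` (what the application wrote) and `st_blocks` (what the file system allocated) is exactly the kind of translation a file system test has to exercise.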
Again, every file system is designed with specific objectives; no two file systems are equal, and each serves a different purpose. In principle, however, every file system shares a common end goal: serving the I/Os of user applications. Understanding the basic semantics behind a file system is therefore not only worthwhile but also useful for formulating a good test strategy and implementing successful test executions.

File system testing considerations
Below are some of the important aspects of file system testing that must be considered for successful test evaluations and executions in a storage software release cycle.

  • I/O path testing: Focus on the complete I/O path, including block allocation, block de-allocation, in-memory operations, cache semantics, disk I/Os, etc.
  • Metadata I/O path testing: Since metadata plays a very important role in file I/O, it's equally important to design test cases around the different metadata I/O operations.
  • Data fragmentation testing: Designing and executing test cases around data fragmentation provides good insight into how the file system behaves under different workloads when data is heavily or lightly fragmented.
  • Metadata fragmentation testing: Metadata fragmentation often turns out to be an I/O bottleneck and can impede overall file system performance. Considering different metadata fragmentation scenarios along with different data streams and workload patterns helps in evaluating file system behavior.
  • Data dedupe testing: If the file system supports deduplication (which most commercially available file systems do as part of data reduction), it's highly recommended to consider scenarios that ingest data streams with different dedupe ratios, closely mimicking customer-representative data. Due consideration should also be given to whether the appliance is a generic storage appliance or a backup storage appliance.
  • Data compression testing: If file system compression is enabled, it's good to verify against the different compression algorithms the file system supports; this helps evaluate which one best suits the file system's offerings.
  • Data/metadata integrity checks: Since maintaining data integrity is the first and foremost objective of any file system, it's essential to design test plans around integrity checking. Almost all file systems have integrity-check mechanisms; for example, ZFS internally uses checksums to maintain the integrity of each allocated block. Many tools are also available for integrity checking, e.g., fsck, which ships with Linux.
  • RAID type consideration: If the file system offers different RAID levels, the test plan should cover those levels with different disk types (e.g., HDD vs. SSD) and should also consider different volume sector sizes and disk sector sizes (e.g., 512 bytes vs. 4 KB).
  • Stress and load testing: A file system generally works as expected under normal scenarios with usual workloads, but the real world always has extreme situations, so it's important to put the file system under stress and load tests. Plan and execute these deliberately, and consider suitable tools such as Load DynamiX, Iometer, vdbench, and Jetstress, targeting different data streams and workload patterns and running them over long periods. As part of stress testing, also inject negative scenarios, e.g., removing one of the disks under heavy I/O and observing the file system and overall system behavior.
  • Performance testing: Everyone is concerned about performance, and no one likes an underperforming system. A solid performance test plan around the file system helps in analyzing file system behavior and overall storage system performance, since poor file system performance is usually taxed heavily against the whole system and eventually impacts the economics of the business. Performance testing is a vast area; care must be taken to pick specific, important criteria when evaluating file system performance, and most importantly, the testing effort should not go haywire.
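As a concrete starting point for the data fragmentation point above, here is a minimal Python sketch of a fragmentation-inducing workload (file names, counts, and sizes are illustrative assumptions, not a prescribed methodology): fill a directory with small files, then delete every other one so that later allocations land in scattered free-space holes.

```python
import os
import tempfile

def fragment_workload(root: str, files: int = 200, size: int = 4096) -> int:
    """Illustrative fragmentation workload: fill with small files, then
    free every other one, leaving scattered free-space holes."""
    paths = []
    for i in range(files):
        p = os.path.join(root, f"frag_{i}.dat")
        with open(p, "wb") as f:
            f.write(os.urandom(size))
        paths.append(p)
    for p in paths[::2]:        # delete alternating files -> fragmented free space
        os.remove(p)
    return len(os.listdir(root))

with tempfile.TemporaryDirectory() as d:
    remaining = fragment_workload(d)
print(remaining)  # 100 files remain out of 200
```

A real test would follow this with a large sequential write and measure how allocation behaves against the fragmented free space.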
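For the dedupe testing point, a simple way to ingest a controlled dedupe ratio is to build the data stream from a fixed pool of unique chunks. The sketch below (function name and parameters are my own) generates a stream whose chunk-level dedupe ratio is roughly total_chunks / unique_chunks:

```python
import hashlib
import os

def make_dedupe_stream(total_chunks: int, unique_chunks: int,
                       chunk_size: int = 4096) -> bytes:
    """Build a data stream with an approximate chunk-level dedupe ratio of
    total_chunks / unique_chunks (illustrative helper)."""
    uniques = [os.urandom(chunk_size) for _ in range(unique_chunks)]
    return b"".join(uniques[i % unique_chunks] for i in range(total_chunks))

# 100 chunks drawn from 4 unique ones -> ~25:1 dedupe ratio
data = make_dedupe_stream(total_chunks=100, unique_chunks=4)
distinct = {hashlib.sha256(data[i:i + 4096]).digest()
            for i in range(0, len(data), 4096)}
print(len(data), len(distinct))  # 409600 bytes, 4 distinct chunks
```

Sweeping `unique_chunks` lets a test ingest streams at several dedupe ratios and compare the file system's achieved reduction against the expected ratio.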
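The data/metadata integrity point can be exercised at file level with per-block checksums, loosely mimicking the block-checksum idea ZFS uses internally (this is an illustrative test harness of my own, not ZFS's actual mechanism):

```python
import hashlib
import os
import tempfile

BLOCK = 4096

def checksum_blocks(path):
    """Per-block SHA-256 map of a file (illustrative helper)."""
    sums = {}
    with open(path, "rb") as f:
        idx = 0
        while chunk := f.read(BLOCK):
            sums[idx] = hashlib.sha256(chunk).hexdigest()
            idx += 1
    return sums

# Write four blocks, record checksums, corrupt block 2, detect the mismatch.
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(os.urandom(BLOCK * 4))
    path = tf.name
before = checksum_blocks(path)
with open(path, "r+b") as f:
    f.seek(BLOCK * 2)
    f.write(b"\x00" * 16)       # simulate silent corruption in block 2
after = checksum_blocks(path)
bad = [i for i in before if before[i] != after[i]]
os.remove(path)
print(bad)  # [2]
```

The same checksum-before/checksum-after pattern scales up to fault-injection runs, where corruption is introduced underneath the file system rather than through it.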
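And for the performance point, even before reaching for tools like vdbench or Iometer, a quick latency probe can sanity-check a mount point. The sketch below (block count, block size, and file location are arbitrary assumptions) times synchronous per-block writes:

```python
import os
import statistics
import tempfile
import time

def measure_sync_write_latency(path, blocks=32, block_size=4096):
    """Crude illustrative probe: per-block write+fsync latency."""
    latencies = []
    buf = os.urandom(block_size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        for _ in range(blocks):
            t0 = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)        # include the flush in the measured latency
            latencies.append(time.perf_counter() - t0)
    finally:
        os.close(fd)
        os.remove(path)
    return statistics.mean(latencies), max(latencies)

mean_lat, max_lat = measure_sync_write_latency(
    os.path.join(tempfile.gettempdir(), "latprobe.dat"))
print(f"mean={mean_lat * 1e6:.0f}us max={max_lat * 1e6:.0f}us")
```

Tracking mean versus maximum latency over longer runs is a cheap way to spot the tail-latency outliers that full performance tools then characterize properly.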

Conclusion
Hope this blog helped you understand the basics of file system testing. As a file system testing practitioner, I recommend having a clear understanding of the architecture and functional aspects of the file system before designing a test plan. Again, every file system is designed to cater to specific requirements along with its basic objectives; care must be taken to consider all the aspects and requirements of the file system under test, including the right test environment, tools, and techniques.

To know more email: marketing@calsoftinc.com
Contributed by: Santosh Patnaik | Calsoft Inc

Santosh Patnaik

More than 14 years of solid IT experience in software development, customer support, and quality assurance. Has worked extensively in enterprise data storage across NAS/SAN/DAS. As a subject matter expert, highly focused on network storage, file systems, and storage performance in particular. Currently working as a QA architect at Calsoft.
