Performance Benchmarking of OpenStack-based VIM and VNFs for NFV Environments

This contributed article was originally published by OpenStack Superuser. We are re-publishing it on our blog.

Beyond reducing CAPEX and meeting the demand for low-latency, high-bandwidth networks, extracting the best possible performance from NFV infrastructure elements is critical for service providers. This post presents a case study on evaluating NFV architecture components, i.e. VNFs (virtual network functions) and the VIM (virtual infrastructure manager), to deliver best-in-class performance to end users, and offers a practical approach to active testing.

Why performance benchmarking matters

Most CSPs (communication service providers) are evaluating or demonstrating 5G readiness in their networks, and some have already launched 5G in selected cities. As telecom networks go through this transition, NFV, the core technology driving 5G implementations, is maturing thanks to active contributions from supporting communities and vendors, who use it to build test cases and solutions that deliver the maximum potential benefit for a network.

Even with all the required technologies and reference models in place to build a 5G network, CSPs remain concerned about the end-to-end performance of network services and their ability to deliver the best experience to end users. This will only grow in importance as users engage more with connected devices to explore new-age technologies such as the internet of things, augmented and virtual reality, and autonomous cars. Performance in the live network, as well as in the development environment, therefore becomes even more crucial, especially with the network slicing feature supported in 5G: each network slice must be delivered with different end-to-end QoS (quality of service) and QoE (quality of experience) characteristics, such as low latency, high throughput, and low packet loss.

Challenges

There are a few challenges associated with testing NFV performance. NFV environments are typically built with elements (VNFs, MANO, NFVi) supplied by different vendors. For example, service providers can choose ONAP or ETSI OSM for the MANO layer; the VIM can be a proprietary solution or the widely used OpenStack; VNFs from different vendors are incorporated or chained to build network services; and the NFVi is constructed using hardware from multiple platform vendors. Such an environment is highly complex, which has a major impact on the performance of network services and on the agility the service provider can deliver.

Service providers must therefore test and benchmark the performance of the NFV elements. Since VNFs are a critical part of NFV, their performance makes the difference in overall NFV operations and has a direct impact on the network. VNFs usually come with different resource requirements because they have different characteristics and are provided by different vendors, even when they all share a common NFV infrastructure (NFVi). Apart from the VNFs, the performance and functionality of the VIM (virtual infrastructure manager) needs to be benchmarked against the resource and infrastructure requirements of this diverse set of VNFs.

Considerations

There are a few considerations worth making to achieve high performance and throughput from NFV elements, including:

  • Performance must be monitored and tested continuously to hunt down any errors.
  • Provisions must be in place to quickly return to normal operations in case of performance degradation.
  • Performance testing should be carried out in the design phase to establish the infrastructure and resource requirements of each VNF.
  • Validation checks are also needed after deployment to confirm that the allotted resources meet those requirements and that the VNF delivers the expected performance (a minimal validation sketch follows this list).
  • A DevOps or CI/CD approach should be integrated to actively track performance measures and apply fixes at runtime.
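To make the post-deployment validation check concrete, here is a minimal sketch (not part of the original case study) of how such a check might look against an OpenStack VIM using the openstacksdk Python client. The cloud name, VNF instance names, flavor names, and resource minimums are illustrative assumptions.

```python
# Hypothetical post-deployment validation sketch: confirm each tracked VNF
# instance is ACTIVE and that its expected flavor meets minimum resource needs.
# Assumes a clouds.yaml entry named "vim-under-test"; all names/values are examples.
import openstack

VNF_REQUIREMENTS = {
    # server name: (expected flavor, minimum vCPUs, minimum RAM in MB)
    "vims-bono": ("m1.small", 2, 2048),
    "vepc-hss": ("m1.medium", 2, 4096),
}

conn = openstack.connect(cloud="vim-under-test")

for vnf_name, (flavor_name, min_vcpus, min_ram) in VNF_REQUIREMENTS.items():
    server = conn.compute.find_server(vnf_name)
    flavor = conn.compute.find_flavor(flavor_name)
    if server is None or flavor is None:
        print(f"{vnf_name}: instance or flavor not found")
        continue
    ok = (
        server.status == "ACTIVE"
        and flavor.vcpus >= min_vcpus
        and flavor.ram >= min_ram
    )
    print(
        f"{vnf_name}: status={server.status}, flavor={flavor_name} "
        f"({flavor.vcpus} vCPU / {flavor.ram} MB RAM) -> {'OK' if ok else 'CHECK'}"
    )
```

A check like this can be wired into the CI/CD pipeline mentioned above so that resource or status regressions surface automatically after every deployment.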

A case study

At Calsoft, we built a demo focused on functionality testing and performance benchmarking of an OpenStack-based VIM used for VNF deployment and performance testing.
The tools and frameworks used were:

  • OPNFV Functest framework for functionality validation
  • OPNFV Yardstick for performance benchmarking and health tests (see the launch sketch after this list)
  • VNFs used for OpenStack-based platform validation: Metaswitch Clearwater IMS, OAI EPC, Juju EPC, and VyOS router
  • End-to-end solution testing with commercially available vEPC VNFs on the cloud
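To give a feel for how a Yardstick run is kicked off, below is a minimal sketch (an assumption, not Calsoft's actual harness) that launches a Yardstick task from Python and reports whether it completed cleanly; samples/ping.yaml refers to one of Yardstick's stock sample task files.

```python
# Minimal sketch of launching an OPNFV Yardstick task from Python.
# Assumes the yardstick CLI is installed and its sample task files are available;
# replace "samples/ping.yaml" with your own task definition.
import subprocess


def run_yardstick_task(task_file: str) -> bool:
    """Start a Yardstick task and report whether it exited cleanly."""
    result = subprocess.run(
        ["yardstick", "task", "start", task_file],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr)
    return result.returncode == 0


if __name__ == "__main__":
    ok = run_yardstick_task("samples/ping.yaml")
    print("Yardstick task", "passed" if ok else "failed")
```

Functest, by contrast, is typically driven as a set of per-tier Docker containers (healthcheck, smoke, VNF) pointed at the target VIM's credentials.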

Results

  • We ran over 2,500 test cases from the Functest test suites and achieved a 95 percent success rate. These tests covered OpenStack-based VIM testing as well as open-source VNFs (vIMS, VyOS vRouter, Juju EPC).
  • A 90 percent pass rate on the OPNFV test cases for the VNFs: vIMS, vEPC, and VyOS router.

The case study is available for free with registration here.
