In addition to reducing CAPEX and meeting the demand for low-latency, high-bandwidth networks, obtaining strong performance from NFV infrastructure elements is critical for service providers. This post presents a case study evaluating NFV architecture components, namely VNFs (virtual network functions) and the VIM (virtualized infrastructure manager), to deliver best-in-class performance to end users, and offers a valid approach to active testing.
Why performance benchmarking matters
Most CSPs (communications service providers) are evaluating or demonstrating the readiness of 5G in their networks. Some service providers have already launched 5G in selected cities. As telecom networks go through this transition, NFV, the core technology driving 5G implementations, is maturing thanks to active contributions from supporting communities and vendors, who are using it to build test cases and solutions that deliver the maximum potential benefits for a network.
Now, even with all the required technologies and reference models in place to build a 5G network, CSPs remain concerned with the end-to-end performance of network services and their ability to deliver the best services to end users. This will only grow in importance as users engage more with connected devices to explore the benefits of new-age technologies such as the Internet of Things, augmented and virtual reality, and autonomous cars. Performance in both the live and the development environment therefore becomes even more crucial, especially when utilizing the network slicing feature supported in 5G, which requires delivering performance for sliced networks with different end-to-end QoS (quality of service) and QoE (quality of experience) characteristics and measurements, such as low latency, high throughput, and low packet loss.
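The idea of per-slice QoS targets can be sketched in code. A minimal sketch follows; the slice names map to the standard 5G slice categories (URLLC, eMBB, mMTC), but the specific threshold values and field names are illustrative assumptions, not figures from any live network:

```python
# Hypothetical per-slice KPI targets; the numeric thresholds are
# illustrative examples only, not real SLA values.
SLICE_KPI_TARGETS = {
    "urllc": {"max_latency_ms": 1.0, "min_throughput_mbps": 10, "max_packet_loss_pct": 0.001},
    "embb": {"max_latency_ms": 10.0, "min_throughput_mbps": 100, "max_packet_loss_pct": 0.1},
    "mmtc": {"max_latency_ms": 50.0, "min_throughput_mbps": 1, "max_packet_loss_pct": 1.0},
}

def slice_meets_targets(slice_name, measured):
    """Check measured KPIs (latency_ms, throughput_mbps, packet_loss_pct)
    against the QoS target profile of the given network slice."""
    t = SLICE_KPI_TARGETS[slice_name]
    return (measured["latency_ms"] <= t["max_latency_ms"]
            and measured["throughput_mbps"] >= t["min_throughput_mbps"]
            and measured["packet_loss_pct"] <= t["max_packet_loss_pct"])
```

The point of the sketch is that each slice carries its own pass/fail criteria, so a single benchmark run must be judged against the profile of the slice it serves rather than one global threshold.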
There are a few challenges associated with testing NFV performance. NFV environments are typically built with elements (VNFs, MANO, NFVi) devised by different vendors: service providers can choose MANO layers such as ONAP or ETSI OSM; the VIM can be a proprietary solution or the widely used OpenStack; VNFs from different vendors are incorporated or chained to build network services; and the NFVi is constructed using hardware from different platform vendors. Such an environment is highly complex, and that complexity has a major impact on the performance of network services and on the agility the service provider can deliver.
Service providers must test and benchmark the performance of NFV elements. Because VNFs are a critical part of NFV, their performance makes the difference in overall NFV operations, which has a direct impact on the network. VNFs (virtual network functions) typically come with different resource requirements because they have different characteristics and are provided by different vendors, even when they all share a common NFV infrastructure (NFVi). Apart from the VNFs, the performance and functionality of the VIM (virtualized infrastructure manager) needs to be benchmarked against the resource and infrastructure requirements of a diverse set of VNFs.
There are a few considerations worth making to achieve high performance and throughput from NFV elements, including:
- Performance must be monitored and tested continuously to hunt down errors.
- Provisions must be in place to quickly restore normal operations in case of performance degradation.
- Performance testing should be carried out in the design phase to establish the infrastructure and resource requirements of VNFs.
- Validation checks are also needed after deployment to ensure that the allotted resources meet the requirements and that each VNF delivers the expected performance.
- A DevOps or CI/CD approach should be integrated to actively track performance measures and apply fixes and patches at runtime.
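The post-deployment validation check described above can be sketched as a simple comparison of a VNF's stated resource requirements against what the VIM actually allotted. The descriptor fields and values below are hypothetical, chosen only to illustrate the check:

```python
# Hypothetical VNF resource descriptor; field names and values are
# illustrative, not taken from any real VNF descriptor format.
REQUIRED = {"vcpus": 4, "ram_mb": 8192, "disk_gb": 40}

def validate_allocation(required, allotted):
    """Return the list of resources where the allotted amount falls short
    of the VNF's stated requirement; an empty list means the check passes."""
    return [name for name, amount in required.items()
            if allotted.get(name, 0) < amount]
```

For example, `validate_allocation(REQUIRED, {"vcpus": 4, "ram_mb": 4096, "disk_gb": 40})` flags `ram_mb` as under-provisioned, which is exactly the kind of mismatch a post-deployment validation pass should surface before it shows up as degraded service performance.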
A case study
At Calsoft, we built a demo focused on functionality testing and performance benchmarking of an OpenStack-based VIM used for VNF deployment and performance testing.
The tools and frameworks used were:
- OPNFV Functest framework for functionality validation
- OPNFV Yardstick for performance benchmarking and health tests
- VNFs used for OpenStack-based platform validation: Clearwater Metaswitch IMS, OAI EPC, Juju EPC, and VyOS router
- We performed end-to-end solution testing with commercially available vEPC VNFs on the cloud.
- We ran over 2,500 test cases from the Functest test suites and achieved a 95 percent success rate. These tests covered OpenStack-based VIM testing as well as open source VNFs (vIMS, VyOS vRouter, Juju EPC).
- We achieved a 90 percent pass rate on OPNFV test cases for the VNFs vIMS, vEPC, and VyOS router.
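As a sanity check on how such success rates are derived, the arithmetic is simply passed cases over total cases; for instance, roughly 2,375 passing cases out of 2,500 corresponds to the 95 percent figure above:

```python
def pass_rate(passed, total):
    """Success rate as a percentage, rounded to the nearest whole percent."""
    return round(100 * passed / total)
```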
The full results are available free with registration here.