Software Defined Networking (SDN) and Network Function Virtualization (NFV) are two of the most important networking innovations of the last few years. Together, they are reshaping IT and telecom network infrastructure by letting telco operators replace dedicated network equipment with virtual machines, program the network to gain full control over it, and introduce lucrative new services for consumers.
For the upcoming 5G architecture, SDN and NFV are proving to be cornerstone technologies, giving communication service providers (CSPs) multiple benefits when deploying networks. The growing number of connected things is pushing CSPs to offer more communication services and to process huge volumes of diverse data such as video and images. These expectations from end users and enterprises add complexity to the network. As NFV has matured over the last few years, CSPs have realized its huge potential: simplified network operations, new services rolled out in minutes, rapid delivery, flexible deployment, efficient operation, reduced time for management (deployment, configuration, and upgrade) of network equipment, and consolidation onto a single platform that takes less space, power, and cost.
Automation in NFV
Apart from flexibility, scalability, agility, and cost efficiency, automation is a prominent feature of NFV and a key selling point. In the early days, deploying a new service within the network took CSPs several months and was error-prone, because configuration processes required manual intervention, leading to costly and delayed service launches. In telco NFV use cases, data centres are spread across multiple regions. In such a distributed telco environment, managing physical resources at every data centre has become difficult and incurs significant manual effort as well as cost.
With the software-driven and highly programmable nature of NFV, CSPs are now able to automate workflows and operations such as infrastructure management, resource management, and network service lifecycle management throughout the network. Using this automation capability, CSPs can reduce OPEX considerably and offer faster time-to-market for services.
NFV Infrastructure Deployment Automation
NFV technology vendors build their respective NFV offerings around native components dedicated to the key functions required for NFV infrastructure. Still, most vendors face critical challenges in automation at the NFV infrastructure level:
- Native components offered by NFV infrastructure vendors need to be deployed automatically on the underlying infrastructure; the time currently required to deploy the platform is considerable.
- High NFV infrastructure performance must be achieved with minimal physical resource consumption, and vendors need up-to-the-mark configuration management to deliver the agility NFV promises.
- In a distributed NFV environment, vendors currently have to configure and manage client data centres manually; remote client platforms should instead be deployed, configured, and managed automatically from a central NFV server.
- Onboarding VNFs poses its own challenges for the NFVi, including:
- VNFs from multiple suppliers should run on any NFV infrastructure. Currently, configuring VNFs takes considerable effort, because tweaks are needed on the VNF side as well as in the NFV infrastructure to make them interoperable.
- CSPs have their own policy-based workflows and automated processes defined to operate and manage the NFV infrastructure. Onboarding new VNFs can disrupt these predefined workflows.
- Significant testing costs can arise while evaluating VNF functionality against the existing orchestrator, VIM, and VNF managers.
- Adding VNFs to the NFVi affects capacity sizing and resource scaling within the NFVi. VNF performance needs to be properly assessed to ensure accurate capacity sizing and appropriate resource scaling.
These challenges push vendors to focus on enhancing the performance of services within the NFV infrastructure and on enabling operational intelligence through automated deployments. A recent advancement in networking, following SDN, NFV, and SD-WAN, is Intent Based Networking (IBN), which applies machine learning to network operations to take networking infrastructure to the next level. IBN improves network availability and agility by translating high-level business policies into network configuration and then applying the appropriate changes to the network infrastructure. We can expect intent based networking to be used within NFV operations as well; before that happens, the challenges at the NFVi level need to be resolved to make NFV ready for this next level of networking.
Technologies and Techniques for Performance Enhancement of services within NFV Infrastructure
- NUMA (Non-Uniform Memory Access)
- CPU Pinning for NUMA Optimization
- Huge Pages
- SR-IOV (Single Root I/O Virtualization)
- DPDK accelerated Open vSwitch (OVS-DPDK)
NUMA (Non-Uniform Memory Access)
In traditional Symmetric Multiprocessor (SMP) systems, all CPUs access the same memory. As more CPUs are added, that shared memory is flooded with access requests, resulting in degraded performance. In a NUMA design, memory is divided into multiple nodes, each local to one of the CPUs in the system. All nodes are connected through an interconnect, so every CPU can still access all memory. The NUMA approach raises server performance considerably and plays a major role in enhancing the performance of services in NFV infrastructure.
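On a Linux host, the NUMA layout described above can be inspected and exploited with the `numactl` tool. A minimal sketch (the process name is a hypothetical placeholder):

```shell
# Show the host's NUMA topology: node count, which CPUs and how
# much memory belong to each node, and inter-node distances.
numactl --hardware

# Run a workload so that both its CPU scheduling and its memory
# allocations stay on NUMA node 0, keeping memory access local.
# "./my_vnf_process" is a placeholder for an actual VNF binary.
numactl --cpunodebind=0 --membind=0 ./my_vnf_process
```

Keeping a packet-processing workload and its memory on the same node avoids the cross-interconnect accesses that degrade performance.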
CPU Pinning for NUMA Optimization
The scheduler in an operating system allocates processor time slots to the threads running in the system. In multicore systems, threads are moved across cores to balance the workload. In a NUMA design, however, this movement can turn local memory accesses into distant ones. With the CPU pinning technique, a process or thread is bound to one or more specific cores, so the scheduler can only place its threads on those nominated cores. If a thread also requests memory from a specific NUMA node, pinning helps ensure that the memory remains local to the thread.
Huge Pages
Physical memory is divided into contiguous regions called pages, which the system accesses as a whole instead of as individual bytes. Each time a process accesses memory, the system consults the Translation Lookaside Buffer (TLB), a table containing the most recent virtual-to-physical address mappings. When a mapping does not exist in the TLB, the system must walk the full set of page mappings to resolve the address, which degrades performance. It is therefore preferable to optimize TLB usage so processes avoid misses. The typical page size on x86 systems is 4 KB. With the huge pages technique, the page size is increased so that a much larger amount of memory can be mapped by the TLB entries, which speeds up memory access for processes. In NFV architecture, huge pages play a key role in enhancing performance: page translation between the hypervisor and the host operating system creates additional overhead, and using huge pages in the host OS reduces it.
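On a Linux host the huge-page pool can be reserved at boot or at runtime, and in an OpenStack VIM the guest can be asked to use it via a flavor property. A minimal sketch (the flavor name is a hypothetical placeholder; commands require root):

```shell
# Option 1: reserve 1 GiB huge pages at boot by adding kernel
# parameters to the GRUB command line:
#   default_hugepagesz=1G hugepagesz=1G hugepages=16

# Option 2: reserve 2048 x 2 MiB huge pages at runtime via sysfs.
echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Verify the reservation.
grep Huge /proc/meminfo

# Ask OpenStack to back guest memory with 1 GiB huge pages
# ("vnf.pinned" is a hypothetical flavor name).
openstack flavor set vnf.pinned --property hw:mem_page_size=1GB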
SR-IOV (Single Root I/O Virtualization)
SR-IOV (Single Root I/O Virtualization) is an extension to the PCI Express (PCIe) specification wherein a VM bypasses the hypervisor and virtual switch and accesses the NIC directly. SR-IOV results in higher packet-processing rates for data transfers and better overall performance.
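On a Linux host with an SR-IOV-capable NIC, virtual functions (VFs) are carved out of the physical function through sysfs. A minimal sketch, assuming root and a hypothetical interface name `ens1f0`:

```shell
# Check how many virtual functions the NIC supports.
cat /sys/class/net/ens1f0/device/sriov_totalvfs

# Create 4 virtual functions on the physical function.
echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs

# Each VF appears as its own PCIe device, which can be passed
# through to a VM, bypassing the hypervisor's virtual switch.
lspci | grep -i "Virtual Function"
```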
DPDK accelerated Open vSwitch (OVS-DPDK)
DPDK is a set of libraries and user-space poll mode drivers (PMDs) for faster packet processing. DPDK enables applications to process packets directly from the NICs, overcoming kernel latency and allowing more packets to be processed by the system. OVS-DPDK is Open vSwitch bundled with DPDK; it improves the performance of OVS while maintaining its core functionality. With OVS-DPDK, packets can be switched directly between physical NICs and applications through direct memory access to the NIC. OVS-DPDK replaces the standard kernel datapath with a DPDK-based datapath, creating a user-space Open vSwitch on the host that uses DPDK internally for its packet forwarding.
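The user-space datapath described above is enabled through `ovs-vsctl` configuration keys. A sketch, assuming an OVS build with DPDK support, pre-reserved huge pages, and a placeholder NIC PCI address:

```shell
# Enable DPDK support in Open vSwitch.
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true

# Allocate hugepage-backed memory (MB per NUMA node) for the
# DPDK datapath.
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"

# Pin the poll-mode-driver threads to cores 1 and 2 (hex CPU mask).
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6

# Create a user-space (netdev) bridge and attach a physical NIC by
# its PCI address (0000:03:00.0 is a placeholder).
ovs-vsctl add-br br-dpdk -- set bridge br-dpdk datapath_type=netdev
ovs-vsctl add-port br-dpdk dpdk-p0 -- set Interface dpdk-p0 \
  type=dpdk options:dpdk-devargs=0000:03:00.0
```

Setting `datapath_type=netdev` is what moves forwarding for that bridge out of the kernel and into the DPDK-backed user-space datapath.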
Calsoft Use Cases
At Calsoft, we have been involved in NFVi deployment automation and performance enhancement for our clients using:
- VMware vCloud
- VMware Integrated Openstack (VIO)
- Red Hat Openstack Platform (RHOSP)
We helped our clients achieve the following benefits for their NFVi:
- Drastically reduced the time taken for deployment of the NFV infrastructure.
- Drastically reduced the professional service cost.
- Besides the production environment, the provided solution was also very useful for setting up labs for demos, PoCs, and VNF testing.
- Eliminated human errors by using end to end automation.
- Helped in performance enhancement of the system.
- Enhanced UI functionalities for performance specific configuration management.
Latest posts by Sagar Nangare
- NVMe over Fabrics: Fibre Channel vs. RDMA - November 15, 2018
- Performance Benchmarking of OpenStack-based VIM and VNFs for NFV Environments - November 13, 2018
- DevOps in NFV: Assuring Health of Service Chains and 5G Network Slices - October 18, 2018