Network Function Virtualization (NFV) aims to run on generic servers the network functions that traditionally live in dedicated hardware appliances, so that they gain all the advantages of cloud applications. For today's applications like 5G, this is almost a must.
Since its conception in 2012, when the world's leading telecom network operators published the original NFV white paper, motivated by the rising cost of building networks from proprietary hardware appliances, NFV has been evolving steadily through European Telecommunications Standards Institute (ETSI) standards. Deployments have been growing slowly but surely, paving the way to a brighter future for the telecom industry. Since 2017, activities like the ETSI NFV Plugtests have been accelerating deployments by improving both interoperability and technology readiness.
In this first post of a three-part series, I'll summarize results from ETSI's second NFV Plugtests, held in January 2018, in which I participated representing Whitestack's NFV solutions. It was an opportunity for vendors and open source communities to get together, assess interoperability and validate solutions against ETSI NFV specifications.
Before diving into the tests, let's quickly review the ETSI NFV architecture, which has three fundamental divisions:
- NFVI + VIM: The physical compute, storage and networking resources to be virtualized (the NFV Infrastructure), plus the software that manages their lifecycle (the Virtualized Infrastructure Manager). The VIM formally belongs to NFV MANO in the ETSI architecture, but since OpenStack and other VIM tools already solve VM lifecycle management, MANO work in practice focuses on the higher layers.
- MANO: The Management & Orchestration software components (NFV Orchestrator and VNF Manager) that take care of the lifecycle of Network Services and Virtual Network Functions.
- VNFs: The actual network functions, each composed of one or more virtual machines (or containers, generically speaking), which can be integrated to build end-to-end (virtualized) Network Services.
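To make those relationships concrete, here is a purely illustrative descriptor sketch of a Network Service composed of two VNFs. The field names and structure are hypothetical, loosely inspired by the ETSI NFV information model; this is not a valid SOL or OSM descriptor.

```yaml
# Illustrative only: hypothetical names and fields, not a real schema.
network-service:
  id: example-ns
  vnfs:
    - id: firewall-vnf        # VNF 1, e.g. from vendor A
      vdus: 1                 # one VM (VDU) in this VNF
    - id: router-vnf          # VNF 2, e.g. from vendor B
      vdus: 2
  virtual-links:
    - id: internal-vl         # virtual link connecting the two VNFs
      connects: [firewall-vnf, router-vnf]
```

The MANO layer would read a descriptor like this and ask the VIM to create the corresponding VMs and networks on the NFVI.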
Back to the Plugtests: many experienced VNF vendors and the industry's most relevant NFVI, VIM and MANO providers met at the ETSI headquarters in Sophia Antipolis, France, where we spent a whole week in a big room running a battery of interoperability tests.
Of course, to make this big challenge possible (dozens of companies, some of them competitors, working together), ETSI organized things well in advance. They set up weekly group calls around four months beforehand, put a VPN in place (the NFV Plugtests HIVE) so everyone could connect their solutions ahead of time (see the image below with participants spanning the globe), and ensured everyone completed a 'pre-testing' process before the onsite week. A special thanks to the ETSI team for pulling this off a second time, and for getting more testing done, with more features, in half the time compared to the first edition!
So, what was tested?
- Multi-VNF Network Services lifecycle (instantiation, manual scaling and termination) –> Two or more VNFs, from different vendors, in the same service.
- Multi-site/VIM Network Service deployments –> VNFs from the same Network Services in different data centers.
- Enhanced Platform Awareness features (SR-IOV, CPU Pinning, Huge Pages, etc.) –> performance boost for VNFs!
- End-to-end Performance Management –> to be able to grab metrics from and create thresholds on both the VIM and the VNFs.
- MANO Automatic Scaling (out/in) capabilities based on performance metrics from both VNFs and VIMs –> to add or remove VNFs/VDUs based on VIM/VNF metrics.
- End-to-end Fault Management –> events and alarms propagation to higher layers
- An optional API test track provided by ETSI to experiment with compliance testing on some specific interfaces (NFV SOL 002 for the Ve-Vnfm reference point, NFV SOL 003 for the Or-Vnfm reference point)
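As a concrete illustration of the Enhanced Platform Awareness features listed above, this is roughly how they are requested on an OpenStack-based VIM. The `hw:cpu_policy` and `hw:mem_page_size` flavor properties are standard Nova extra specs, and `--vnic-type direct` is the standard way to request an SR-IOV port; the resource names (`epa.medium`, `datanet`, `vnf-image`, etc.) are made up for the example, and the exact setup depends on how the NFVI is configured.

```shell
# Flavor requesting CPU pinning and huge pages for a VNF's VDU.
openstack flavor create epa.medium --vcpus 4 --ram 8192 --disk 40
openstack flavor set epa.medium \
  --property hw:cpu_policy=dedicated \
  --property hw:mem_page_size=large

# SR-IOV virtual function for the data-plane interface
# ("datanet" is a hypothetical provider network).
openstack port create sriov-port --network datanet --vnic-type direct
openstack server create vnf-vdu1 --flavor epa.medium \
  --image vnf-image --port sriov-port
```

In practice the MANO layer generates these requests from flags in the VNF descriptor, so the VNF vendor declares the EPA requirements once and any compliant VIM can honor them.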
Even though our obvious objective was to run tests successfully, ETSI encouraged us to mark results as 'failed' whenever interoperability did not work, so that this rich feedback would influence the NFV standards in a positive way.
We brought both our MANO solution (WhiteNFV, based on Open Source MANO Release 3) and our VIM solution for cloud and NFV environments (WhiteCloud, based on the OpenStack Pike distribution), so we had to demonstrate interoperability with other NFVI/VIM and MANO providers respectively. I'm glad to say that both performed quite well!
About the author
Gianpietro Lavado is a network solutions architect interested in the latest software technologies to achieve efficient and innovative network operations. He currently works at Whitestack, a company whose mission is to promote SDN, NFV, cloud and related deployments all around the world, including a new OpenStack distribution.
This post first appeared on LinkedIn. Superuser is always interested in community content, get in touch at: editorATopenstack.org