To set the stage, Oliveira described Verizon’s present challenges:
First, margins are declining across the board, even as carried traffic explodes, driven by demand for video and cloud services.
Second, the network itself delivers a low ROI despite what many people assume. The reality, Oliveira explained, is that carrier networks are built to accommodate peak usage, and as such are over-provisioned most of the time.
Third, carriers bear high costs, and those costs come from multiple sources.
Most importantly, the cost of core network equipment is not declining in line with Moore's Law. That’s big, said Oliveira.
Heterogeneous networks (involving multiple technologies running through and between layers) also require multiple, highly specialized teams. Further, there is (as yet) little automation for configuration and provisioning of resources (on this topic, read our summary of Cisco CTO Lew Tucker’s Summit talk here).
Business lock-in is a related issue for Verizon, and a potential cost escalator too -- a lack of network automation risks long provisioning delays and slow service innovation.
Verizon is therefore taking NFV very seriously, because the benefits include:
- Use of common hardware platforms
- More efficient data center operations (including fewer silos)
- Faster time to market
- Reduced business risk
- Graceful upgrades to software
So, why OpenStack?
First and foremost, Oliveira emphasized that OpenStack offers a de facto implementation of a VIM (Virtual Infrastructure Manager). That fact alone, he suggested, is reason enough to use OpenStack.
He also noted a critical mass of vendors porting and developing applications (VNFs) targeted at OpenStack. Likewise, integrators have developed the necessary OpenStack deployment expertise, Oliveira said.
Next, OpenStack is a common environment that reduces vendor dependencies. This was a recurring theme throughout the Summit overall, and is a point of emphasis for Verizon too.
Finally, Oliveira said, OpenStack components are slowly being tuned to the needs of carriers, a trend he called essential to Verizon’s ongoing efforts and a big motivator on its path to superuser status.
Oliveira is tasked with architecting a common platform that can be used across Verizon for running VNFs (Virtual Network Functions), as well as other internal applications. That process is nearing completion, and production is around the corner.
The lab’s setup had to be:
- Hardware agnostic, with a network focus, because Verizon has a large percentage of “north-south” traffic
- Run on COTS (commercial off-the-shelf) hardware and software as much as possible
- Focused on delivering a high level of operational service, meaning:
  - High availability, with no interference between applications (that is, “minimal noisy neighbor impact,” as Oliveira put it)
  - QoS control of the networking path
  - No over-provisioning
Finally, a self-service portal was deemed essential, supporting multiple tenants, security, and isolation (with role-based access control).
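For readers unfamiliar with how OpenStack handles role-based access control: in the releases of that era, each service enforced it through a per-service `policy.json` file mapping API actions to role rules. The fragment below is an illustrative sketch (not Verizon’s configuration) of the kind of rules involved:

```json
{
    "admin_required": "role:admin or is_admin:1",
    "identity:create_user": "rule:admin_required",
    "identity:list_users": "rule:admin_required"
}
```

Tenants (projects) plus rules like these give the isolation and self-service boundaries Oliveira described: a user’s token carries roles scoped to a tenant, and each API call is checked against the matching rule.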
Here is what Oliveira and his team ran, against a DDS (Distributed Data Sites) evaluation scenario with a CDN use case:
- 64x scalable servers (4 sleds in 2U)
  - 2x 8-core CPUs, 128GB RAM, 6x drives
  - 2x 10/40Gb NICs
- 4x 1U rack servers (provisioning and monitoring)
- 4x 48x10Gb/12x40Gb OpenFlow-capable switches
- 1x 48x10Gb/12x40Gb OpFlex-capable switch
- 2x 36-port 40Gb OpenFlow-capable switches
- 6x OpenStack environments
  - 1x Grizzly, 3x Havana, 2x Icehouse
  - KVM, ESXi, and Docker "hypervisors"
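In Nova, the hypervisor behind each environment is selected per compute node via the `compute_driver` option. As a rough, era-specific sketch (option names are from the Havana/Icehouse releases and changed later; this is illustrative, not Verizon’s configuration):

```ini
# nova.conf -- hypervisor selection, one driver per compute node
[DEFAULT]
compute_driver = libvirt.LibvirtDriver     ; KVM via libvirt
libvirt_type = kvm

; For ESXi-managed clusters, the VMware driver was used instead:
; compute_driver = vmwareapi.VMwareVCDriver

; Docker support came from the out-of-tree nova-docker driver
; (hence the scare quotes around "hypervisors"):
; compute_driver = novadocker.virt.docker.DockerDriver
```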
Notably, he and his team used the lab to evaluate multiple components in multiple combinations, although Oliveira was careful not to name any winners.
As he remarked, “Verizon does not allow me to comment on specific companies or vendors, but let me just say: the end result definitely works, and works quite well.”
Still, Oliveira laid out who and what was variously involved along the way:
- Software vendors: Mirantis, Piston, Red Hat, Canonical, Wind River, ALU, HP, Dell
- Deployment tools: Red Hat PackStack & Foreman, Canonical/Ubuntu MaaS + Juju, and Mirantis Fuel
- Storage backends: Ceph, Gluster, LVM+iSER
- Volume allocation filters
- Open vSwitch vs. NIC-embedded switch vs. commercial overlay
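To make the storage comparison concrete: Cinder’s multi-backend support lets several of these backends run side by side, with scheduler filters (the “volume allocation filters” above) deciding where each new volume lands. The following is a hedged sketch using era-appropriate driver paths; pool names and values are illustrative, not Verizon’s actual settings:

```ini
# cinder.conf -- multi-backend sketch (Havana/Icehouse-era driver paths)
[DEFAULT]
enabled_backends = ceph, gluster, lvm
; filters applied when the scheduler places a new volume
scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
volume_backend_name = ceph

[gluster]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
volume_backend_name = gluster

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = lvm
```

Volume types mapped to each `volume_backend_name` then let tenants pick a backend class without knowing what sits behind it.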
In parting, Oliveira outlined for the crowd the key lessons he and his team learned along the way.
In terms of weaknesses, high availability still needs work in OpenStack, although Icehouse looks much better in testing, he explained.
Oliveira also noted that Linux and KVM still need work, especially around NUMA memory allocation and scheduling, and around using SSDs as cache.
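For context on the NUMA point: at the time, operators who needed predictable performance typically pinned guest vCPUs and memory to one NUMA node by hand in the libvirt domain XML, rather than relying on the scheduler. A minimal sketch (the CPU and node numbers are illustrative):

```xml
<!-- Pin an 8-GB, 4-vCPU guest entirely to NUMA node 0 so memory
     accesses stay local instead of crossing the interconnect. -->
<domain type='kvm'>
  <memory unit='GiB'>8</memory>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>
</domain>
```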
Neutron support for SR-IOV is lacking, Oliveira also said, though he was careful to add that both SR-IOV and DPDK have boosted Verizon’s performance “by a huge amount.”
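The gap he describes is why SR-IOV was often wired up outside Neutron at the time: virtual functions were created on the host (for example, by writing a VF count to the NIC’s `sriov_numvfs` sysfs file) and then passed straight into a guest via libvirt. A hedged fragment, with an illustrative PCI address:

```xml
<!-- Libvirt guest definition: attach one SR-IOV virtual function
     directly to the VM, bypassing the host's virtual switch.
     The PCI address below is a placeholder for a real VF address. -->
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
  </source>
</interface>
```

The performance gain comes from the guest talking to the NIC hardware directly; the trade-off is that such ports were invisible to Neutron’s normal provisioning at the time, which is the gap Oliveira flagged.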
Before taking questions, Oliveira emphasized working with a knowledgeable, well-connected vendor who can quickly debug problems and provide patches. Pushing fixes upstream, he said, is essential -- it spares you from re-patching on the next version and lets you focus on innovating.