The future Internet will run on virtual infrastructure, include built-in monitoring functions, and be largely encrypted. In this article, I describe developments expected in each of these areas in 2017.
#1: Virtual infrastructure will have built-in monitoring
The growing importance of SDN/NFV-based architectures in carrier networks changes the requirements for monitoring solutions. For fully effective monitoring, network operators must now be able to monitor not only traffic between physical interfaces but also logical interfaces all the way up to the application layer (Layer 7). This requires a virtual probe function integrated in the NFV infrastructure (NFVI).
This new probe function will be built directly into the infrastructure, and it will be especially important for Service Providers as they migrate toward SDN- and NFV-based architectures.
The problem with current probes and taps
Existing approaches have some critical limitations:
- Taps or splitters cannot reach the logical interfaces that use internal VM-to-VM communication between functions hosted on the same server
- Containers rely heavily on distributing traffic across many transient, virtualized resources, and legacy probes lack the flexibility to serve such virtual network elements
Why a Virtual Probe?
A built-in virtual probe can monitor both external physical interfaces and VM-to-VM communications. It is a software entity that can be attached to logical or physical interfaces and instantiated as a VM, a container, or a process belonging to the hypervisor hosting the VMs.
A virtual probe monitoring virtual network functions reduces the CAPEX and OPEX associated with the monitoring solution by using standard off-the-shelf hardware rather than proprietary appliances. Enabling virtual probes to aggregate dynamic counters as early as possible in the processing chain further reduces the complexity and cost of analytics solutions.
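To make the idea of aggregating counters early in the processing chain concrete, here is a minimal, product-agnostic Python sketch of per-flow counter aggregation keyed by 5-tuple. A real virtual probe would read packets from a vNIC or vhost-user interface; here the packet fields are simply passed in as arguments.

```python
from collections import defaultdict

class FlowCounters:
    """Minimal per-flow counter aggregation, keyed by 5-tuple.
    Illustrative sketch only; a real virtual probe would capture packets
    from a virtual interface rather than take them as arguments."""

    def __init__(self):
        # 5-tuple -> [packet count, byte count]
        self._flows = defaultdict(lambda: [0, 0])

    def observe(self, src_ip, dst_ip, src_port, dst_port, proto, length):
        key = (src_ip, dst_ip, src_port, dst_port, proto)
        counters = self._flows[key]
        counters[0] += 1          # packets
        counters[1] += length     # bytes

    def export(self):
        """Return aggregated counters, e.g. to feed an analytics backend."""
        return {k: tuple(v) for k, v in self._flows.items()}
```

Because aggregation happens at the probe, only compact counters (not raw packets) need to travel to the analytics layer, which is what reduces cost downstream.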
#2: It will be easier and quicker to develop new, service aware network functions
Communication Service Providers are looking for next-generation solutions based on SDN and NFV, and in particular for ways to leverage the OPNFV architecture. This means that their suppliers, the networking vendors, need an efficient framework to develop these new, carrier-grade, high-performance VNFs. Until now, they have had only basic building blocks such as Linux, iptables, OVS, or Intel DPDK, and as a result development remains complex and costly. In addition, it has traditionally been complex and costly for developers to embed real-time traffic visibility in the form of Deep Packet Inspection (DPI).
Enter Vector Packet Processing (VPP): making it easier to develop new, high-performance networking products
VPP is a high-performance packet-processing stack that runs on commodity CPUs. This virtual switch module was open-sourced by Cisco in early 2016 as part of the Linux Foundation project FD.io ("Fido"), which focuses on solving new networking challenges. VPP has a track record of high performance, flexibility, and a rich feature set. For the networking industry, it is a disruptive new technology with the potential to lower both cost and risk for teams developing a new generation of virtualized networking applications.
There is now an opportunity to use VPP as a framework to build applications faster and to improve VNF performance. Prototyping by Qosmos R&D has produced promising results: 1) several stateful applications can coexist on a single VPP instance, and 2) it is possible to scale from small devices such as CPEs all the way up to core VNFs. Combined with Deep Packet Inspection (DPI), VPP is well suited to firewalling and performance-monitoring applications.
Adding some DPI spice to VPP
VPP in itself is good, but our practical experience shows that it must be complemented with real-time traffic visibility provided by DPI software. That DPI software should be linked to shared flow tables, and the whole stack should be integrated and monitored through OPNFV using standard management tools such as OpenStack for orchestration and OpenDaylight (ODL) as a controller.
In a nutshell, by combining VPP with ready-to-use DPI software, developers can work in a DevOps mode to accelerate time to market for new, high-performance and application-aware VNFs.
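A minimal sketch can illustrate the shared-flow-table idea: the DPI engine inspects only the first few packets of each flow, records its verdict in a table, and the fast path then labels every subsequent packet with a cheap lookup. The `classify` callback below is a hypothetical stand-in for a real DPI engine, and the sketch is in Python for readability (a VPP plugin would of course be written in C).

```python
class SharedFlowTable:
    """Sketch of a flow table shared between a DPI engine and a fast path.
    `classify` is a hypothetical stand-in for a real DPI engine; only the
    first packets of each flow are sent through it."""

    MAX_DPI_PACKETS = 4   # inspect at most this many packets per flow

    def __init__(self, classify):
        self._classify = classify
        self._table = {}   # 5-tuple -> {"app": label or None, "seen": n}

    def label(self, five_tuple, payload):
        entry = self._table.setdefault(five_tuple, {"app": None, "seen": 0})
        if entry["app"] is None and entry["seen"] < self.MAX_DPI_PACKETS:
            entry["seen"] += 1
            entry["app"] = self._classify(payload)   # slow path: DPI
        return entry["app"]                           # fast path: lookup
```

The design choice here is the key point: DPI cost is paid once per flow rather than once per packet, which is what keeps application awareness compatible with carrier-grade throughput.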
#3: Encryption is here to stay, but it may not be a problem
Why is traffic encryption on the rise?
Encryption on the public Internet is constantly rising, with some estimates showing that over 70% of traffic will be encrypted by the end of 2016. A few content providers (e.g. Facebook, YouTube, and Netflix) are responsible for most of the encrypted traffic. Overall, this is a positive evolution toward protecting privacy on the Internet, a trend that has accelerated since Snowden's revelations about NSA interception activities.
Similar encryption trends can be observed for datacenters, with Yahoo, Google, and Microsoft encrypting all their data center traffic. In the enterprise, a third of the traffic is now encrypted both for in-house traffic (email, Web apps) and cloud-based applications.
Encrypted traffic can be classified
It is important to remember that encryption does not mean that the traffic is undetectable; it just means that the content remains private. Advanced techniques can still classify encrypted traffic, enabling service providers to continue to perform policy enforcement, optimize traffic and ensure a good user experience. Here are a few examples of encrypted traffic classification techniques, with accuracy and limitations.
Example 1: Classifying traffic encrypted with SSL/TLS (e.g. https)
- Typical services: Google, Facebook, WhatsApp
- Classification method: Read name of service in SSL/TLS certificate or in Server Name Indication (SNI)
- Accuracy: Deterministic method – 100% accurate
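The SNI method can be shown in code: the Server Name Indication extension of a TLS ClientHello is sent in cleartext, so a classifier can read the hostname without decrypting anything. The following sketch parses the SNI from a raw ClientHello record and, for demonstration, also builds a synthetic ClientHello to parse (real traffic would come from a capture).

```python
import struct

def extract_sni(record):
    """Extract the Server Name Indication from a TLS ClientHello record.
    Returns None if the record is not a ClientHello or carries no SNI."""
    if len(record) < 5 or record[0] != 0x16:       # 0x16 = handshake record
        return None
    pos = 5                                        # skip the record header
    if record[pos] != 0x01:                        # 0x01 = ClientHello
        return None
    pos += 4                                       # handshake type + 3-byte length
    pos += 2 + 32                                  # client version + random
    sid_len = record[pos]; pos += 1 + sid_len      # session ID
    cs_len = struct.unpack_from(">H", record, pos)[0]
    pos += 2 + cs_len                              # cipher suites
    comp_len = record[pos]; pos += 1 + comp_len    # compression methods
    if pos + 2 > len(record):
        return None                                # no extensions present
    ext_total = struct.unpack_from(">H", record, pos)[0]
    pos += 2
    end = pos + ext_total
    while pos + 4 <= end:
        ext_type, ext_len = struct.unpack_from(">HH", record, pos)
        pos += 4
        if ext_type == 0x0000:                     # server_name extension
            # skip list length (2) and name type (1), read name length (2)
            name_len = struct.unpack_from(">H", record, pos + 3)[0]
            name_start = pos + 5
            return record[name_start:name_start + name_len].decode("ascii")
        pos += ext_len
    return None

def build_client_hello(hostname):
    """Build a minimal synthetic ClientHello carrying an SNI, for testing."""
    sni_entry = b"\x00" + struct.pack(">H", len(hostname)) + hostname
    sni_list = struct.pack(">H", len(sni_entry)) + sni_entry
    ext = struct.pack(">HH", 0x0000, len(sni_list)) + sni_list
    exts = struct.pack(">H", len(ext)) + ext
    body = (b"\x03\x03" + b"\x00" * 32             # client version + random
            + b"\x00"                              # empty session ID
            + struct.pack(">H", 2) + b"\x13\x01"   # one cipher suite
            + b"\x01\x00"                          # one (null) compression method
            + exts)
    hs = b"\x01" + len(body).to_bytes(3, "big") + body
    return b"\x16\x03\x01" + struct.pack(">H", len(hs)) + hs
```

Because the hostname is read directly from the handshake rather than inferred, this is the deterministic method the bullet above describes.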
Example 2: Classifying encrypted P2P traffic
- Typical protocols: BitTorrent, µTorrent, Vuze
- Classification method: Use IP addresses of known P2P peers
- In a P2P session, the initialization phase is not encrypted. During this phase, IP addresses of peers can be identified. All subsequent flows to or from those IP addresses are identified as P2P (e.g. BitTorrent). Statistical protocol identification further increases accuracy by measuring how closely flow behavior matches known traffic patterns.
- Accuracy: Typically more than 90% of P2P sessions are identified
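The peer-IP method above can be sketched as follows. The sketch assumes the peer addresses have already been extracted from the cleartext initialization phase (e.g. a tracker response); the parsing of that phase is omitted.

```python
class P2PPeerTracker:
    """Sketch of P2P classification via peer-IP learning. Peer addresses
    are assumed to come from the unencrypted initialization phase of a
    P2P session; extracting them is out of scope here."""

    def __init__(self):
        self._peers = set()

    def learn_peer(self, ip):
        """Record a peer IP observed during the cleartext init phase."""
        self._peers.add(ip)

    def classify_flow(self, src_ip, dst_ip):
        """Label a later (possibly encrypted) flow as P2P if either
        endpoint is a known peer."""
        if src_ip in self._peers or dst_ip in self._peers:
            return "p2p"
        return "unknown"
```

This is why the method is probabilistic rather than deterministic: flows involving peers learned too late, or peers never seen in cleartext, are missed, which matches the roughly 90% figure above.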
Example 3: Classifying Skype
- Classification method: Search for binary patterns in traffic flows
- The pattern is usually found in the first 2 or 3 packets of a flow
- Accuracy: 90–95%
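The pattern-matching approach reduces to a simple search over the first few payloads of a flow. In the sketch below, the pattern bytes used in the test are hypothetical placeholders, not an actual Skype signature.

```python
def match_binary_pattern(packets, pattern, max_packets=3):
    """Return True if `pattern` occurs in any of the first `max_packets`
    payloads of a flow. The pattern passed in is supplied by the caller;
    real DPI engines ship curated signature databases for this purpose."""
    for payload in packets[:max_packets]:
        if pattern in payload:
            return True
    return False
```

Limiting the search to the first few packets is what keeps this method cheap enough to run at line rate, at the cost of missing flows whose signature appears later.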
Thanks to advanced classification techniques, traffic optimization, policy enforcement, and user experience are largely unaffected by encryption. This means that communication service providers can continue to leverage Layer 7 visibility to ensure service quality and manage resource utilization, while respecting subscriber privacy!
About The Author:
Erik drives all aspects of marketing for Qosmos, including product marketing, demand generation, branding and communication. Prior to Qosmos, he led marketing for Netcentrex (acquired by Comverse), and managed marketing for Web hosting provider Integra (now Level 3). His previous experience includes several international marketing roles at Nortel. Erik’s views on high-tech trends are regularly featured in articles, blog posts, webcasts, video interviews, and industry events.