2017 to See More of SD-WAN, Virtualization of Application Layer Monitoring and Software Based Network Instrumentation



2017 will be a year of increased software-defined WAN (SD-WAN) adoption in the enterprise market, a trend driven mostly by the need for more bandwidth at lower cost. This trend began to exert more influence during the second half of 2016, but it will really pick up in the new year.

Enterprises and service providers alike are investing in SD-WAN. We’ve seen Fortune 100 companies like Johnson & Johnson and Fidelity, as well as service providers like CenturyLink, Time Warner Telecom, and Comcast, working through concrete projects to make SD-WAN a reality in their networks.

Hand-in-hand with the adoption of SD-WAN is the requirement for overarching, end-to-end performance visibility, putting companies that specialize in these types of solutions in a position to make a difference and help this new technology succeed.


As the new year gets underway, service providers will focus increasingly on gaining visibility into the application layer of their networks. They will take a much more serious approach here, because without that visibility they won’t be able to differentiate based on quality of experience (QoE), the new benchmark for success.

The virtualization of networks—which results in much more dynamic traffic flows—significantly impacts the tools that service providers will use for this new level of monitoring. Traditional techniques of tapping lines to capture and analyze data will no longer be effective for predicting exactly how traffic flows end to end.

As virtual network functions (VNFs) move, scaling up and out at the service of SDN control and orchestration, not only can their location change—introducing latency and different network dynamics—but so can their performance capabilities, as more processors or RAM can be allocated on the fly. Performance assurance solutions now have to cover this virtual world in real time, to catch any momentary or discontinuous outages or ‘soft failures’ that wouldn’t normally occur in physical networks.

Enterprises will be watching closely to see how service providers address application-layer monitoring. Increasingly dependent on cloud-hosted applications, they regard the reliability of their access feed to the cloud as being of utmost importance. In many cases, that feed carries the whole business, so its performance is crucial.


As enterprises continue to move applications to the cloud, data encryption becomes increasingly important. During 2017, this will intersect directly with the application-layer monitoring trend, creating the need for new assurance techniques.

Using classic deep packet inspection (DPI) tools won’t be feasible anymore; they are rendered useless when traffic is encrypted. How will monitoring systems adapt to this new reality?


One application-layer monitoring (ALM) method seeing increased development activity analyzes how traffic traverses a network, where it originates (the source server) and where it ends (a destination such as a phone, PC, or another server), using TCP header information, various forms of reverse DNS lookup, and SSL certificate analysis to reach deep insight without decryption.

This method, which relies on advanced big data technology to reassemble sessions and analyze them, needs no DPI, and offers much of the same (and sometimes more) insight. Stay tuned in this space as performance assurance experts work with leading service providers to refine and introduce this method in 2017.
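To make the idea concrete, here is a minimal, simplified sketch of metadata-only classification of encrypted flows. The flow fields, hostname table, and category rules are all hypothetical illustrations; a production system would populate the hints from live reverse DNS lookups and SSL certificate analysis, and would reassemble full sessions with big-data tooling rather than inspect single flow records.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    """One encrypted flow record: metadata only, no payload."""
    src_ip: str
    dst_ip: str
    dst_port: int
    bytes_sent: int

# Hypothetical stand-in for reverse DNS / SSL-certificate metadata.
# A real deployment would fill this from live reverse DNS lookups and
# from the Common Name / Subject Alternative Names seen in SSL
# certificates, not from a static table.
RDNS_HINTS = {
    "203.0.113.10": "edge.video-cdn.example.net",
    "198.51.100.7": "mail.corp.example.com",
}

def classify(flow: Flow) -> str:
    """Infer the application behind an encrypted flow from metadata alone."""
    host = RDNS_HINTS.get(flow.dst_ip, "")
    if "video" in host or "cdn" in host:
        return "streaming"
    if "mail" in host or flow.dst_port in (993, 995):
        return "email"
    if flow.dst_port == 443:
        return "generic-https"
    return "unknown"
```

The point of the sketch is that every signal used, destination address, port, and the hostname learned out-of-band, is available even when the payload is fully encrypted, which is why this style of ALM needs no DPI.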


The other trends discussed here paint a clear picture: service providers and their enterprise customers are departing from hardware-based network monitoring and adopting software-based solutions instead. The dynamic, elusive nature of traffic patterns over next-generation networks makes it much more difficult, or even impossible, to assure network performance using hardware.

This means there will be an even greater need in 2017 for software-based network instrumentation. Service providers will continue to push for it, because it’s less expensive to deploy and operate. The proliferation of compute capabilities onboard base stations and other infrastructure is a major driver for technologies like virtual customer premises equipment (vCPE), which make it possible to distribute software-agent instrumentation throughout the network.

To date, some of the largest operators in the world have deployed virtualized instrumentation from a variety of vendors at national and multinational scale, including China Mobile, Telefonica, AT&T, and Comcast. There is already a significant shift towards these hardware-free methods—we saw more virtual than physical instrumentation deployments in 2016, and expect this trend to accelerate.

2016 was the year when software-based instrumentation reached the demanding performance specifications and scalability of leading hardware solutions. 2017 will be a tipping point, and the market will really open up.


During 2016, there was a lot of consolidation in the service provider market. In the U.S., that included deals between Level 3 and Time Warner Telecom, Consolidated Communications and Fairpoint, and Charter and Time Warner Cable, just to name a few. Elsewhere, the global scale of this trend is illustrated by activity like BT Group’s acquisition of EE in the U.K., the Liberty Global-Vodafone merger in the Netherlands, the merger of Reliance and Aircel in India, and GTT’s transatlantic cable expansion with the acquisition of Hibernia Networks.

This coming year will be pivotal for these new entities. They’ll focus first on stabilizing and unifying their infrastructures and putting in place the tools and solutions required for end-to-end performance visibility, then on differentiating with their new footprints. In most cases, that means offering premium performance.

About The Author:
Patrick Ostiguy founded Accedian in 2004 and serves as President, Chief Executive Officer and member of the Board of Directors. With more than fifteen years of telecom industry experience, Mr. Ostiguy has filed several patent applications and published dozens of industry conference proceedings and technology articles. Prior to founding Accedian, Mr. Ostiguy co-founded Avantas Networks and launched the industry’s first portable Ethernet and SONET field-services test set. Avantas was acquired for US$93M by EXFO Electro-Optical Engineering, where Mr. Ostiguy continued in product management and participated in several M&A activities at the executive level. Before Avantas, Mr. Ostiguy was part of the initial Positron Fiber Systems (PFS) team that commercialized the industry’s first Ethernet over SONET Multi-Service Provisioning Platform (MSPP). PFS was acquired by Reltec in 1997 for $200M, and subsequently by Marconi. Mr. Ostiguy received the 2011 Ernst & Young Entrepreneur of the Year award in Quebec, in the technology solutions category, in recognition of his exceptional leadership in driving growth, development and innovation in the industry.

