
Network Analytics Become More Distributed, Prevalent and Deeper in Networks Driven by NFV Acceptance and IoT Scale Growth

Image Credit: Saisei

Everyone is talking about new uses of analytic data, sensors, telemetry, and the role of IoT devices driving a lot more data analysis. This is true across many sectors, but perhaps none as much as the IP networks that analytic data itself traverses. The abundance of new devices, sensors, wearable tech and other things connecting to the Internet via both wireless and fixed networks is driving massive amounts of IP traffic and a corresponding need to better understand that traffic through analysis. Ironically, this creates more and more traffic if done the traditional way: streaming flow data – typically NetFlow or IPFIX records – to central locations for processing.

The idea of centralized, massive-data NetFlow/IPFIX analysis, with everything streamed to central locations, breaks down at IoT scale; networks have to become more intelligent analytically, and that analytic intelligence has to be distributed. In 2016, I think we will see major new trends emerge in this area.

#1 - INDEPENDENCE FINALLY FROM CENTRAL REPORTING AND COLLECTION SERVERS – DISTRIBUTED ANALYTICS ON-DEMAND

Streaming network analytic data across networks to centralized collection sites will give way to analytics-on-demand – available in each network location and alerting central sites only when analytic events are detected. Instead of ingesting enormous volumes of network analytic data at centralized servers, these servers will program analytic events at each remote point in a network and be informed only when those events occur.
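As a minimal sketch of this model (the endpoints, metric names and thresholds below are hypothetical, not any vendor's actual API), a remote node might evaluate centrally programmed event definitions locally and send only a tiny record upstream when one trips:

```python
import json
import time
import urllib.request

CENTRAL_URL = "http://central.example.net/events"  # hypothetical collector

# Event definitions "programmed" by the central site: metric -> threshold
programmed_events = {
    "flow_setup_rate": 50000,   # flows/sec
    "p95_rtt_ms": 120,          # milliseconds
}

def read_local_metrics():
    """Placeholder for the node's local analytics engine."""
    return {"flow_setup_rate": 48200, "p95_rtt_ms": 135}

def report_event(metric, value, threshold):
    """Send one small pre-processed record instead of raw flow data."""
    payload = json.dumps({
        "site": "pop-01",
        "metric": metric,
        "value": value,
        "threshold": threshold,
        "ts": time.time(),
    }).encode()
    req = urllib.request.Request(CENTRAL_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

while True:
    metrics = read_local_metrics()
    for metric, threshold in programmed_events.items():
        value = metrics.get(metric)
        if value is not None and value > threshold:
            report_event(metric, value, threshold)  # upstream traffic only on events
    time.sleep(1)
```

Note that the central site receives nothing at all while the network is healthy – exactly the inversion of the NetFlow/IPFIX streaming model.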

As NFV sees wider adoption this year, the availability of general-purpose computing in our networks will rise tremendously. Networks have traditionally had plenty of compute power in their forwarding planes but very little for general computing. NFV will change this, giving networks built on general-purpose hardware massive amounts of general computing resources compared to networks of the past. This new compute power will enable network functions with analytic capabilities far more intelligent than before, and it will be ‘the beginning of the end’ of streaming analytic data about networks to “big data” servers for processing.


Network analytics will become distributed, even to thousands of sites in the largest networks, and then become available on demand, on an as-needed basis, without the need to build massive infrastructure and impose overhead on networks just to stream NetFlow or IPFIX to a central location for processing. The former big-data collectors, which ingested massive amounts of network flow data centrally, will begin receiving just tiny amounts of pre-processed data from each site and will be used more often to program the distributed analytic systems to report only when analytic thresholds are crossed.
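The central side of that model becomes correspondingly small. A sketch, again with hypothetical site names, endpoint path and payload shape: instead of running collectors sized for a full flow feed, the central system just pushes event definitions out to each site:

```python
import json
import urllib.request

SITES = ["pop-01", "pop-02", "pop-03"]  # hypothetical site inventory

# Thresholds each site should watch for locally
event_definitions = {
    "flow_setup_rate": 50000,
    "p95_rtt_ms": 120,
}

for site in SITES:
    url = f"http://{site}.example.net/analytics/program"
    req = urllib.request.Request(
        url,
        data=json.dumps(event_definitions).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)  # each site now reports only on threshold breaches
```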

The bottom line: I think we’ll see many network operators move to a distributed analytic model, rather than a streaming model like NetFlow or IPFIX, to gather the best analytic and telemetry data from their networks.

#2 - TIME TO TAKE ACTION ON ANALYTIC EVENTS WILL DROP TO 0.0 SECONDS!


Because intelligence about networks will be distributed to just about every location, there will no longer be long delays between detecting an analytic event at a central site and triggering a remote action in response. Network devices will be able to detect complex analytic events as they occur and act on them without delay.
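A sketch of what acting without delay means in practice (the flow fields and the rate-limit hook are invented for illustration): detection and action happen in the same pass on the same box, so the response latency is just the cost of a function call:

```python
def apply_rate_limit(flow_id, kbps):
    """Hypothetical local control hook; on a real device this would
    program the forwarding plane directly."""
    print(f"rate-limiting {flow_id} to {kbps} kbps")

def notify_central(flow):
    """Tiny event record for the central site; the action has already fired."""
    print(f"event: {flow['id']} rtt={flow['rtt_ms']}ms "
          f"retransmits={flow['retransmit_pct']}%")

def handle_sample(flow):
    # Detect and act in-line: no round trip to a central collector.
    if flow["retransmit_pct"] > 5.0 and flow["rtt_ms"] > 200:
        apply_rate_limit(flow["id"], kbps=512)
        notify_central(flow)

handle_sample({"id": "10.0.0.5->203.0.113.9:443",
               "retransmit_pct": 7.2, "rtt_ms": 310})
```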

If your networks are sending analytic data and telemetry to central big-data collectors and waiting minutes or more for useful calculations or MapReduce jobs to complete, then you’re waiting far too long to take intelligent action on analytics.

#3 - USER QUALITY OF SERVICE WILL VASTLY IMPROVE WITH TIGHTER COUPLING OF ANALYTIC DATA AND TRAFFIC CONTROL DURING NETWORK EVENTS

In 2016, we’ll see scenarios where, for example, real-time network analytic data senses that all customers’ Webex and Salesforce IP streams in one location are experiencing major application distress, despite no major physical network errors being identified. Further real-time analysis will show the operator that the same BGP AS paths are used to reach both Webex and Salesforce, and the operator will decide to shut down a BGP peer, resolving the issue and eliminating the distress in Webex and Salesforce applications for the CSP’s customer base.
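The correlation step in that scenario can be surprisingly simple once the analytic data is available locally. A sketch using invented data (the AS numbers are from the documentation range, not real Webex or Salesforce paths): intersect the AS paths of the distressed applications to find the shared upstream worth investigating:

```python
from functools import reduce

# AS paths observed for each distressed application's prefixes
# (documentation-range ASNs; illustration only)
as_paths = {
    "webex":      [64496, 64497, 64510],
    "salesforce": [64496, 64497, 64511],
}

# ASes common to every distressed path are the candidate trouble spots
common = reduce(lambda a, b: a & b, (set(p) for p in as_paths.values()))
print(f"shared upstream ASes: {sorted(common)}")  # -> [64496, 64497]
```

From there, the operator (or an automated policy) can deprefer or shut down the session toward the suspect peer and watch the distress clear in the same real-time analytics.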

In the past, this same incident would have gone unnoticed by operators, resulted in dozens to hundreds of support calls, cost hundreds of human support hours, and taken one to three days to detect and resolve. No more: this year we will see all of this happen within a few minutes.

About The Author:
Bill Beckett is Founder and Chief Strategy Officer of Saisei. He has more than 25 years’ experience in the telecommunications industry in various network engineering, operations, sales, and marketing roles. Prior to Saisei, Bill co-founded Mobile Media Communications, a mobile wallet platform in Singapore. He was the VP and GM for Anagran in Asia Pacific and served in senior sales engineering positions in Asia Pacific for Tellabs and Vivace. He has also served in various roles at Mayan Networks, Allied Riser, OnStream Networks, MFS Datanet and British Telecom.
