While the industry is lit up with conversation around the race toward virtualization, the fog around first steps keeps getting thicker. What are the first steps toward creating virtual networks? The race to deploy new services is fast and furious, and the benefits of virtualization seem clear - faster time to market means being first to win customers. Or does it?
While the industry-wide conversation has been predominantly product focused, the complexities and importance of network performance are addressed far less often. Yet quality and reliability in the delivery of new services matter more than ever. Enter the importance of complex data analytics and reporting for virtual services.
Virtualized services put an increased pressure on carriers to understand the health of their networks and remediate and troubleshoot problems on the fly. But what specific analytics should carriers be looking for? Here is a snapshot of the guide to data analytics for virtual networks.
Service Quality - At the end of the day, operators face stringent SLAs with their enterprise customers. Meaningful data around service quality is critical to fulfilling those SLAs and guaranteeing the same high-quality, reliable service customers are used to in a legacy network environment.
Regional Performance - Second to overall network health is the ability to gather insight into which markets, regions, or customers are performing poorly. Identifying where the problem is matters, but it is just as important to see this information in real time, with analytics that help managers quickly prioritize responses and resolve problems.
Insight - It is not enough to just identify the challenges. Generating impactful reports is critical to top-down and bottom-up analysis and to taking the necessary remedial action.
Service quality management prior to the age of virtualized services was, if not simple, at least much less complex and more direct than it will be in the virtual age.
Previously, if a network operator wanted to monitor services with an eye toward guaranteeing SLAs, everything they needed to monitor existed as a fixed piece of hardware. That made it easier to pull analytics on the state of the service at a given time, and those analytics and the related reports drawn from them tended to have a longer shelf life because equipment and connections were unlikely to change overnight.
Today, that increasingly is not the case as networks become more dynamic and fluid structures. In some cases, a network function that used to be represented by a piece of hardware equipment in the network is now an application running on a virtual machine that itself is running on a server—and even more challenging is the notion that the application may be up and running one moment and gone the next as customer needs dictate dynamic set-up and tear-down of services and applications.
To track what’s going on in these networks, operators need monitoring systems designed to interface with virtualized network components, NFV managers, SDN controllers and orchestration platforms. But they also need flexible analytics capabilities that can tell them what’s going on and can deliver both a root cause analysis and an impact analysis of events on the virtual services. These capabilities are key to helping them determine not only what happened and how widespread the damage is, but also how specific SLAs could be affected and how to automate remediation and optimization.
Managing regional performance in real time
Managing network performance across a region in a virtualized network environment requires analytical capabilities that can make sense of these dynamic networks on a broad scale and in real time. Any solution used must be capable of analyzing many data points and producing actionable intelligence, also in real time. This requires new thinking about two traditional models of network investigation - root cause analysis and impact analysis.
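To make the idea concrete, here is a minimal sketch of how per-region metrics might be turned into a prioritized list of SLA breaches. The region names, the latency threshold, and the sample stream are all illustrative assumptions, not a specific vendor's data model:

```python
# Sketch: aggregate per-region latency samples and flag SLA breaches,
# worst first, so managers can prioritize response. All values are
# hypothetical examples.
from statistics import mean

SLA_LATENCY_MS = 50.0  # assumed contractual latency ceiling

samples = [  # (region, measured latency in ms), e.g. from streaming probes
    ("us-east", 42.0), ("us-east", 48.5),
    ("eu-west", 61.2), ("eu-west", 75.9),
    ("apac",    49.1), ("apac",    55.3),
]

def breaches_by_severity(samples, threshold=SLA_LATENCY_MS):
    """Return (region, mean latency) pairs over threshold, worst first."""
    by_region = {}
    for region, latency in samples:
        by_region.setdefault(region, []).append(latency)
    over = [(region, mean(vals)) for region, vals in by_region.items()
            if mean(vals) > threshold]
    return sorted(over, key=lambda item: item[1], reverse=True)

print(breaches_by_severity(samples))
```

In a production system the sample list would be a continuous stream and the aggregation window would slide in real time, but the prioritization logic stays the same.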
Root cause analysis is an analytics function that, in legacy networks, was more rules-based and relied on having detailed knowledge of a network’s topology at a given time. In a legacy network, that topology was not likely to change, but as mentioned earlier, the virtualized network is ever-changing, and the same rules can’t be followed. So, it’s important to have tools that can comprehensively and automatically model causality and normalize hard and soft failures across a multi-vendor virtualized network topology.
Having automatic correlation to analyze collected data and closed loop feedback to automate remediation and network optimization is key to success in virtual service offerings. These tools and capabilities must be able to determine root cause across not just physical components, but virtual components, compute nodes and applications.
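The causality modeling described above can be sketched in a few lines. This is a simplified, hypothetical dependency map (the component names are invented for illustration); a real system would rebuild this map continuously as virtual topology changes:

```python
# Sketch of topology-aware root cause analysis: each component lists the
# resources it runs on, spanning virtual and physical layers.
depends_on = {
    "vnf-firewall": ["vm-12"],      # VNF runs on a virtual machine
    "vnf-router":   ["vm-12"],
    "vm-12":        ["server-3"],   # VM runs on a compute node
    "server-3":     [],             # physical root of this chain
}

def root_causes(alarmed, depends_on):
    """Return alarmed components with no alarmed dependency.

    An alarm on a component whose underlying resource is also alarmed
    is treated as a symptom; the deepest alarmed dependency wins.
    """
    roots = set()
    for comp in alarmed:
        deps = depends_on.get(comp, [])
        if not any(d in alarmed for d in deps):
            roots.add(comp)
    return roots

alarms = {"vnf-firewall", "vnf-router", "vm-12", "server-3"}
print(root_causes(alarms, depends_on))  # {'server-3'}
```

Here a single compute-node failure explains four alarms, which is exactly the correlation a rules-based legacy tool would miss once the VM-to-server mapping changes.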
Impact analysis also becomes a different sort of science in the virtualized network era. How can the real impact of an event be known if the network environment has a mix of physical and virtual elements, and changes dynamically?
Cross-domain graph database technology can analyze what is happening with network nodes and connections in real-time to build a comprehensive regional map of the service topology as it exists at any moment. Such service graphs can be integrated with inventory and orchestration systems to help provide a full view of the network and resources so that decisions can be made in real time and changes automated in order to optimize performance across that region.
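Impact analysis is essentially the same graph walked in the opposite direction. A minimal sketch, assuming a hypothetical "resource supports X" edge list (the inverse of the dependency map used for root cause analysis):

```python
# Sketch of impact analysis over a service graph: starting from a failed
# resource, walk outward to every component and service it supports.
# Node names are illustrative assumptions.
from collections import deque

supports = {
    "server-3":     ["vm-12"],
    "vm-12":        ["vnf-firewall", "vnf-router"],
    "vnf-router":   ["svc-enterprise-A", "svc-enterprise-B"],
    "vnf-firewall": ["svc-enterprise-A"],
}

def impacted(failed, supports):
    """Breadth-first traversal from a failed node to all dependents."""
    seen, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for dep in supports.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(impacted("vm-12", supports)))
# ['svc-enterprise-A', 'svc-enterprise-B', 'vnf-firewall', 'vnf-router']
```

A cross-domain graph database runs the same kind of traversal at network scale, against a topology that is refreshed in real time rather than a static dictionary.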
Seeing is believing - and understanding
Being able to collect vast amounts of data from a virtualized network and quickly analyze all of it is challenge enough, but ultimately doesn’t amount to much if it can’t be processed and presented in the right way to help network operators make cogent decisions about network health and SLAs.
Bringing all this complex data into a dashboard view helps network officials synthesize and absorb what could otherwise prove an overwhelming job. The ability to turn analytics data into customized reports for different parts of the organization is also important. When people can get their hands on richly detailed reports they can easily understand, they can make informed decisions on how best to optimize and grow their network and services.
There is indeed a race to develop new services to generate more revenue and improve competitive edge. A virtualized network environment can allow that to happen faster and with greater flexibility, but it’s also important for service providers to understand just how much virtualization changes the game for them. It’s a new network with, for all its benefits, a very complex nature. To tap its full benefits, operators need to understand how it performs, and when it doesn’t perform, what went wrong and where. For that, they need monitoring capabilities and associated analytics that are built for the complexities of this new environment. The old tools just won’t be able to keep pace.