Network management has always required maintaining some form of inventory of the devices, connections, and base configuration of those components. For service providers, inventory has always been one of the most difficult things to manage accurately because their networks are so large and complicated. Knowing just what is deployed physically is hard enough, but once the logical components and services that ride atop the physical layer are added, it is almost impossible to be 100% accurate. In fact, it is a generally accepted industry statistic that service provider inventories are only about 65-75% accurate. That has proven adequate in the past, but it will not properly support future activities.
Enterprise networks, by contrast, tend to be smaller and less complicated, traditionally used for internal purposes, with the exception of large organizations whose networks span the globe. As a result, the inventory challenges described above are easier to manage, and for the most part a view of the live network itself is a sufficient inventory.
However, the introduction of software-driven network technologies is changing the game for both service providers and enterprises. The saving grace for both used to be how static the network was: things didn't change quickly, which left time to manually reconcile items that were out of sync or missing. But as Software Defined Networking (SDN) and Network Function Virtualization (NFV) have emerged, the network has ceased to be a slow-moving, static entity. With virtualization, the number of devices in the network can change almost hour to hour, and as SDN technologies mature, even when the device count remains the same, the connectivity between those devices becomes a dynamic environment where event-driven changes can happen in seconds. The events driving these changes include configuration changes, outages, and capacity spikes such as those experienced on Super Bowl Sunday.
As a result, the view of the components that make up the network, and the ways they are interconnected, can no longer be maintained as it has been in the past. A traditional relational database won't cut it anymore. Engineers can no longer rely on logging into devices to see what is available, especially with an elastic network and an evolving topology that is constantly being redefined by the SDN tools in use.
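The shift from static inventory rows to a live topology can be illustrated with a small sketch. This is a hypothetical in-memory model, assuming a controller that emits link-up/link-down events; the event shape is an assumption for illustration, not any specific product's API:

```python
from dataclasses import dataclass, field

@dataclass
class Topology:
    """A network view that is updated by controller events, rather than
    static rows in a relational table."""
    links: set = field(default_factory=set)  # undirected device-to-device links

    def apply_event(self, event: dict) -> None:
        """Apply an SDN-style event ("link-up" / "link-down") to the view."""
        link = frozenset((event["a"], event["b"]))
        if event["type"] == "link-up":
            self.links.add(link)
        elif event["type"] == "link-down":
            self.links.discard(link)

    def neighbors(self, device: str) -> set:
        """Current neighbors of a device, derived from the event stream."""
        return {d for link in self.links if device in link for d in link} - {device}

topo = Topology()
topo.apply_event({"type": "link-up", "a": "leaf1", "b": "spine1"})
topo.apply_event({"type": "link-up", "a": "leaf2", "b": "spine1"})
topo.apply_event({"type": "link-down", "a": "leaf1", "b": "spine1"})
print(topo.neighbors("spine1"))  # only leaf2 remains connected
```

The point of the sketch is that the "inventory" is never edited by hand; it is a projection of the event stream, so it stays accurate as the topology churns.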
To manage NFV and SDN, operators are implementing a plethora of orchestrators and controllers, which only adds more complexity to the network. Yet with no unified standards in place, this complexity is necessary: no single controller or orchestrator can provide everything needed.
The path to network transformation is not all doom and gloom. There is an answer to all of this complexity and the dynamic nature of the evolving “modern network”. That answer lies in four key ideas: data federation, automation first, API driven, and low-code environments.
Data federation: Network data is going to exist in multiple places. It will exist in the network itself, in EMS/NMS systems that operators may use, in Operational Support Systems (OSS) that may exist to house inventory, and pertinent data may even exist in Business Support Systems (BSS), such as billing or CRM, that are needed to fully manage the network. As such, federation of all of this data into a common view for network engineers is critical. The full picture of the network, and all of its components, is required.
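A minimal sketch of what such federation can look like, assuming per-device records pulled from an OSS inventory, an NMS, and the live network itself. The source names, record fields, and precedence order are illustrative assumptions, not a real product's data model:

```python
# Hypothetical data federation sketch: merge per-device records from several
# systems into one common view, tracking where each field came from.

def federate(sources: dict) -> dict:
    """Build a unified per-device view.

    `sources` maps a source name (ordered lowest to highest precedence) to
    {device_name: attributes}. Later sources win on conflicting fields, and
    every field records its origin so discrepancies can be reconciled.
    """
    view = {}
    for source, devices in sources.items():
        for device, attrs in devices.items():
            record = view.setdefault(device, {})
            for key, value in attrs.items():
                record[key] = {"value": value, "source": source}
    return view

unified = federate({
    "oss_inventory": {"edge1": {"model": "X440", "site": "NYC"}},
    "nms":           {"edge1": {"sw_version": "12.1"}},
    "live_network":  {"edge1": {"sw_version": "12.2"},    # live data wins
                      "edge2": {"model": "X670"}},        # missing from the OSS
})
print(unified["edge1"]["sw_version"])
```

Note that `edge2` appears only in the live network; surfacing such gaps between sources is exactly the reconciliation work that federation makes visible.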
Automation first: NFV, SDN, and other emerging technologies are making the network increasingly dynamic. They are also adding new components and layers of complexity that operators have not had to deal with before. The worlds of IT and Networking are on a path to intersection. The way to handle this additional complexity is through automation, and it is becoming critical that operators take an automation-first approach to rolling out these new technologies. Designing the manual process for human hands and then trying to automate that exact process is not sustainable. Instead, design the process for automation first and focus on what is required of the automation toolset. If automation is viewed as mission-critical instead of a nice-to-have, the need to define a manual process goes away.
API driven: In this new networking approach, everything has an API. Orchestrators, controllers, and even many of the new network devices can be communicated with directly via API. This opens up new capabilities and should accelerate automation. When every component can be controlled via software and controlled like software, the benefits of automation increase exponentially.
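To make "everything has an API" concrete, here is a hedged sketch of programming a flow through a controller's REST API using only the Python standard library. The endpoint path, payload fields, and bearer token are assumptions for illustration; real controllers each define their own schemas and authentication:

```python
import json
from urllib import request

# Hypothetical controller endpoint; not a real product's API.
CONTROLLER = "https://sdn-controller.example.net"

def build_flow_request(switch: str, match_ip: str, action: str) -> request.Request:
    """Build (but do not send) an authenticated flow-programming request."""
    payload = {"switch": switch, "match": {"dst_ip": match_ip}, "action": action}
    return request.Request(
        url=f"{CONTROLLER}/api/v1/flows",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer <token>"},  # placeholder credential
        method="POST",
    )

req = build_flow_request("leaf1", "10.0.0.0/24", "forward:port2")
print(req.method, req.full_url)
# request.urlopen(req) would actually submit the change
```

Because every change is an HTTP call with a structured payload, it can be versioned, tested, and repeated like any other piece of software, which is where the exponential benefit comes from.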
Low-code environments: Lastly, as software eats the networking world, it is becoming obvious that a skillset gap exists. Network engineers are not developers, and developers are not network engineers. How, then, is that gap closed so that both can work on network-specific automations? The answer is a low-code tool that abstracts the complexity of development into a drag-and-drop environment while adding built-in network intelligence that abstracts the complexity of how to engineer a network. An environment that enables NetOps, where development expertise and network engineering knowledge are bonuses rather than hard requirements, is important in achieving automation of the "modern network".
The bottom line is that the “modern network” is coming. It is going to be a software-driven, dynamically changing network made up of physical and virtual devices and connections that cannot be managed manually by adding more bodies to handle scale. The only way to successfully operate this network is automation first, automation last, and automation always!