Users’ demand for more bandwidth is nothing new for the telecom industry. What is new is the universal escalation of that demand: the varying needs of automotive vendors, ISPs, storage facilities, and everyday consumers are all pieces of a puzzle that will reshape the telecom space over the coming year. Specifically:
#1: Automotive networking bandwidth needs will accelerate
Car manufacturers worldwide are developing both “smarter” and more “interactive” cars. The impending switch to self-driving or network-dependent driver-assistance capabilities, along with richer multimedia features, is dramatically increasing the bandwidth needed for in-vehicle, car-to-car, and car-to-roadside communication. Ethernet has been adopted by most major auto manufacturers and certification/standards bodies as the technology best suited to these future in-vehicle networking needs. Over the past three years the IEEE 802.3 working group has hosted several projects targeting the automotive market. The fastest current Automotive Ethernet PHYs run at 1 Gigabit per second (Gbps), and the standards body is already developing a 10 Gbps solution.
Alongside the improvements to the in-vehicle network, there has been an increase in laws and regulations mandating the storage of sensor data, as well as data on the decisions made by the onboard software. This reflects the fact that cars will now be seen as IoT-enabled, responsible for their actions, and active on the network. The industry is still exploring the best solutions for such large data transfers, clearinghouse activities, and related issues, but regardless of how those regulations evolve, major improvements to datacom infrastructure will be needed to support millions of cars sending gigabytes (or more) of data daily. Over the next year we will see only the very front end of this machine-to-machine traffic increase, but over the course of the decade it could grow into the single largest category of M2M bandwidth. And because human lives are on the line, the data must also be delivered on time and reliably. A reasonable expectation is running error-free 99.9999% of the time, putting extra stress on conformance and interoperability testing.
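To put that 99.9999% figure in perspective, a quick back-of-envelope sketch (illustrative arithmetic only, not part of any standard) shows how little error time “six nines” actually allows per year:

```python
# Back-of-envelope: error time permitted at a given availability target.
# The 99.9999% ("six nines") figure comes from the text above.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

def allowed_error_time(availability: float,
                       period_seconds: int = SECONDS_PER_YEAR) -> float:
    """Seconds of error/downtime permitted per period at `availability`."""
    return period_seconds * (1.0 - availability)

print(allowed_error_time(0.999999))  # six nines -> roughly 31.5 s/year
```

At six nines, a link may misbehave for only about half a minute in an entire year, which is why conformance and interoperability test regimes become so demanding.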
#2: Cord cutting will drive continued growth in video demand and streaming services
While automotive vendors worry about how to deliver data reliably to and between cars and roadside stations, ISPs worry about simply keeping traditional cable service alive. Video streaming services currently account for an estimated 20% of internet traffic, and that share will only continue to increase. In the US, the “cord cutting” movement means users are increasingly dropping traditional TV subscriptions and flocking to on-demand streaming services like Netflix and Hulu. Recently, ISPs have even started offering IP-based set-top boxes that deliver a TV service equivalent to the traditional one, allowing the ISP to offer a TV-and-internet package in which the customer purchases only the internet option.
An estimated 80% of consumer traffic is IP video. Some home applications, such as home automation and in-home healthcare devices, are already seeing huge bandwidth growth, 20-30% CAGR in some cases. One way infrastructure companies can manage this demand, along with the repercussions of cord cutting, is by investing in upgraded capabilities and increasing the throughput of current installations.
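The 20-30% CAGR cited above compounds quickly. A minimal sketch (the 100-unit baseline is an arbitrary assumption, not a figure from the text) shows how demand multiplies over a planning horizon:

```python
# Compound annual growth: project demand under the 20-30% CAGR range
# cited above. The baseline of 100 units is an arbitrary starting point.
def project_demand(baseline: float, cagr: float, years: int) -> float:
    """Demand after `years` of compounding at `cagr` (e.g. 0.25 for 25%)."""
    return baseline * (1.0 + cagr) ** years

for rate in (0.20, 0.30):
    print(f"{rate:.0%} CAGR over 5 years: {project_demand(100, rate, 5):.0f} units")
```

Even at the low end of the range, demand roughly 2.5x in five years; at 30% it nearly quadruples, which is why capacity upgrades cannot wait for demand to arrive.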
#3: Cloud storage of data and malicious attacks will create a knee in the curve of IoT bandwidth needs
The incorporation of IoT into sensors and systems in general has led to a shift from storing all data locally to storing it in a typically more accessible cloud-based system. This inherently creates two issues: first, how do we store the massive amounts of data collected by the estimated 20 billion always-on devices and sensors; and second, how do we ensure the security of the data that is constantly collected. Companies have started to build their own customized server farms to meet the demands of this data aggregation, pushing the ideology of open-source hardware as well.
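The scale of the storage problem is easy to underestimate. A rough aggregate-volume estimate, assuming (purely for illustration) that each of the 20 billion devices mentioned above uploads 1 MB per day:

```python
# Rough aggregate-volume estimate for ~20 billion always-on IoT devices.
# The 1 MB/day per-device figure is an assumption for illustration only;
# real devices vary by orders of magnitude in either direction.
DEVICES = 20e9
MB_PER_DEVICE_PER_DAY = 1.0  # assumed

total_mb = DEVICES * MB_PER_DEVICE_PER_DAY
total_pb = total_mb / 1e9  # 1 PB = 1e9 MB
print(f"~{total_pb:.0f} PB of new data per day")
```

Even at that modest per-device rate, the fleet generates on the order of 20 petabytes of new data every day, which helps explain why companies are building custom server farms rather than relying on off-the-shelf infrastructure.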
As IoT-enabled devices multiply, hackers have a growing number of exploits to take advantage of across many platforms, from small personal devices to powerful processing computers. These exploits stem not only from design defects, but also from the inevitable outcome of more people with less IT/security knowledge managing these devices. For example, during a recent prospective-student visit at the University of New Hampshire InterOperability Laboratory, a father mentioned that he had purchased a network-connected home security camera with a built-in microphone and speaker, and one night heard a stranger’s voice coming from the camera’s speaker.
Beyond these increases in power and prevalence, the past few years have also seen larger-scale botnet and DDoS attacks, most notably the Mirai botnet and the WannaCry ransomware. While the Mirai botnet broke the internet (so to speak), the WannaCry attack nearly shut down several hospitals in the UK by holding infected computers hostage until a fee was paid to the attackers.
While both attacks were severe, causing losses of confidence, data, or money, their impacts were temporary and limited. It is likely that in the near term a more sophisticated and sustained attack will not only be possible, but will have wide-reaching impacts and costs. What if your self-driving car had a digital “boot” locking it down until you paid its ransom? What happens when automated freight transports arrive at the wrong locations?
It seems highly unlikely that the demand for people, cars, homes, and just about everything else to be connected to the internet will decline. In fact, demand will likely only increase, and not just companies but entire industries will need to keep up or fall behind. Alongside the growing need for speed, security and reliability are just as important, given the fragile ecosystems these devices operate in. Every part of a device, from the electrical circuitry to the upper-layer protocols, must be more robust than ever before to sustain the technological complexity we call the internet and take for granted every day.
About The Author:
Michael Klempa is the Ethernet and Storage Technical Manager of the SAS, SATA, PCIe, Fast, Gig, and 10Gig Ethernet Consortia at the University of New Hampshire InterOperability Laboratory (UNH-IOL). Michael began working at the UNH-IOL in 2009 as an undergraduate student, during which time he spent a summer at Intel in their Enterprise Storage Server Division. He obtained his Bachelor of Science in Electrical Engineering in 2013 and is now pursuing his Master’s in Electrical Engineering. Michael’s primary roles at the UNH-IOL are to oversee the student employees in the SAS, SATA, PCIe, Fast, Gig, and 10Gig Consortia and to conduct research and development in those technologies. He has participated on panels at DesignCon and has had work published on EDN Network.