2019 Is All About Data - from AI and Machine Learning to Edge Computing, Data Center Automation and Data Protection Featured


Happy New Year and welcome to 2019, a year full of possibilities.

2018 was a year of maturity for Digital Transformation, and most companies are now committed to transforming their businesses. They have laid out their strategies and are allocating resources to this transformation. Public cloud, agile methodologies and DevOps, RESTful APIs, containers, analytics, and machine learning are all being adopted. Against this backdrop, there are five trends for 2019 that I would like to call out.

#1: Companies Will Shift from Data Generating to Data Powered Organizations

A 2017 Harvard Business Review article on data strategy noted that “cross-industry studies show that on average, less than half of an organization’s structured data is actively used in making decisions - and less than 1% of its unstructured data is analyzed or used at all”. Deployments of large data hubs have only resulted in more data silos that are not easily understood, related, or shared. To utilize the wealth of data they already have, companies will be looking for solutions that give comprehensive access to data from many sources. Data curation will become a focus: understanding both the meaning of the data and the technologies that are applied to it, so that data engineers can move and transform the essential data that data consumers need to power the organization. More focus will be on the operational aspects of data rather than the fundamentals of capturing, storing, and protecting it. Metadata will be key, and companies will look to object-based storage systems to create a data fabric as a foundation for building large-scale, flow-based data systems.
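To make the metadata idea concrete, here is a minimal sketch of a metadata-driven catalog: objects scattered across silos are registered with curated tags, so consumers can discover data by meaning rather than by knowing which system holds it. The class, URIs, and tag names are all illustrative, not any particular product's API.

```python
# Toy "data fabric" catalog: register objects from different silos with
# descriptive metadata, then search by meaning instead of by location.
class DataCatalog:
    def __init__(self):
        self._entries = []

    def register(self, uri, **metadata):
        """Record an object's location plus curated metadata tags."""
        self._entries.append({"uri": uri, "meta": metadata})

    def find(self, **criteria):
        """Return URIs of all objects whose metadata matches every criterion."""
        return [
            e["uri"] for e in self._entries
            if all(e["meta"].get(k) == v for k, v in criteria.items())
        ]

catalog = DataCatalog()
catalog.register("s3://sales/2018/q4.parquet", domain="sales", pii=False, year=2018)
catalog.register("hdfs://logs/web/2018", domain="web-logs", pii=True, year=2018)
catalog.register("s3://sales/2019/q1.parquet", domain="sales", pii=False, year=2019)

# Find non-PII sales data without knowing it lives in two different stores.
print(catalog.find(domain="sales", pii=False))
```

In a real object store, the tags would live as object metadata alongside the data itself, which is what lets a data fabric span heterogeneous back ends.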

#2: AI and Machine Learning Unleash the Power of Data to Drive Business Decisions

AI and machine learning technologies can glean insights from unstructured data, connect the dots between disparate data points, and recognize and correlate patterns in data, such as facial recognition. AI and machine learning are becoming widely adopted in home appliances, automobiles, plant automation, and smart cities. However, from a business perspective, AI and machine learning have been more difficult to implement, as data sources are often disparate and fragmented, and much of the information generated by businesses has little or no formal structure. While there is a wealth of knowledge that can be gleaned from business data to increase revenue, respond to emerging trends, improve operational efficiency, and optimize marketing to create a competitive advantage, the requirement for manual data cleansing prior to analysis becomes a major roadblock. A 2016 Forbes article cited a survey of data scientists showing that most of their time, 80%, is spent massaging data rather than mining or modeling it.
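The "massaging" that consumes most of that time is largely mundane normalization. A hypothetical example of what one such cleansing pass might look like (the field names, date formats, and sentinel values are invented for illustration):

```python
# Illustrative cleansing pass over a messy business record: normalize key
# casing and whitespace, unify date formats, and map sentinel strings to
# proper missing values before any analysis or modeling can begin.
import re

def clean_record(raw):
    # Normalize keys and trim stray whitespace from string values.
    rec = {k.strip().lower(): v.strip() if isinstance(v, str) else v
           for k, v in raw.items()}
    # Convert US-style M/D/YYYY dates to ISO 8601.
    m = re.match(r"(\d{1,2})/(\d{1,2})/(\d{4})$", rec.get("date", ""))
    if m:
        rec["date"] = f"{m.group(3)}-{int(m.group(1)):02d}-{int(m.group(2)):02d}"
    # Treat empty strings and sentinel values as missing.
    return {k: (None if v in ("", "N/A", "n/a") else v) for k, v in rec.items()}

print(clean_record({" Date ": "3/7/2018", "Revenue": "N/A"}))
# → {'date': '2018-03-07', 'revenue': None}
```

Multiply rules like these across dozens of fragmented sources and the 80% figure becomes easy to believe.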

Hubert Yoshida,
Vice President and CTO,
Hitachi Vantara

In addition to the tasks noted above, one needs to understand that data scientists do not work in isolation. They must team with engineers and analysts to train, tune, test, and deploy predictive models. Building an AI or machine learning model is not a one-time effort: model accuracy degrades over time, and monitoring and switching models can be quite cumbersome. Organizations will be looking for orchestration capabilities to streamline the machine learning workflow and enable smooth team collaboration.
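The monitoring half of that workflow can be sketched very simply: track a rolling window of prediction outcomes and flag the model when its accuracy drops below an acceptable level, so the team knows to retrain or switch models. The window size and threshold here are illustrative, not a recommendation.

```python
# Sketch of model-accuracy monitoring: a rolling window of outcomes and a
# threshold that triggers a retraining signal when accuracy degrades.
from collections import deque

class ModelMonitor:
    def __init__(self, window=100, threshold=0.9):
        self.outcomes = deque(maxlen=window)   # True = correct prediction
        self.threshold = threshold

    def record(self, correct):
        self.outcomes.append(bool(correct))

    def needs_retraining(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                       # not enough evidence yet
        return sum(self.outcomes) / len(self.outcomes) < self.threshold

monitor = ModelMonitor(window=10, threshold=0.8)
for correct in [True] * 9 + [False]:           # 90% accurate: healthy
    monitor.record(correct)
print(monitor.needs_retraining())              # → False
for _ in range(5):                             # accuracy drifts downward
    monitor.record(False)
print(monitor.needs_retraining())              # → True
```

Real orchestration platforms layer scheduling, alerting, and automated model swapping on top of exactly this kind of signal.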

#3: Increasing Data Requirements Will Push Companies to The Edge with Data

Enterprise boundaries are extending to the edge - where both data and users reside, and multiple clouds converge. While the majority of IoT products, services, and platforms are supported by cloud-computing platforms, increasingly high data volumes and low-latency and QoS requirements are driving the need for mobile cloud computing, where more of the data processing is done at the edge. Public clouds will provide the connection between edge and core data centers, creating the need for a hybrid cloud approach based on open REST or S3 app integration. Edge computing will be less of a trend and more of a necessity as companies seek to cut costs and reduce network usage. The edge will require a hardened infrastructure, as it resides in the “wild”, outside the protection of cloud and data center walls.
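The bandwidth argument for edge processing can be shown in a few lines: aggregate raw sensor readings locally and send only a compact summary to the core. The payload shape below is hypothetical, not a particular S3 or REST API.

```python
# Minimal sketch of edge-side processing: raw readings are reduced to a
# small JSON summary before crossing the network, cutting bandwidth and
# keeping latency-sensitive work close to the data source.
import json
import statistics

def summarize_at_edge(device_id, readings):
    """Reduce raw readings to the summary the core tier actually needs."""
    return json.dumps({
        "device": device_id,
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": max(readings),
    })

raw = [21.4, 21.6, 21.5, 35.0, 21.5]   # raw temperature samples, one spike
payload = summarize_at_edge("sensor-042", raw)
print(payload)   # five readings collapsed into one small JSON document
```

At realistic IoT scale, where a device may emit thousands of samples per second, this reduction is the difference between a workable and an unworkable network bill.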

#4: Data Centers Become Automated

The role of the data center has now changed from being an infrastructure provider to a provider of the right service at the right time and the right price. Workloads are becoming increasingly distributed, with applications running in public and private clouds as well as in traditional enterprise data centers. Applications are becoming more modular, leveraging containers and microservices as well as virtualization and bare metal. As more data is generated, there will be a corresponding growth in demand for storage space efficiency. Enterprises need to make the most of information technology - to engage with customers in real time, maximize return on IT investments, and improve operational efficiency. Accomplishing this requires a deep understanding of what is happening in their data centers to predict and get ahead of trends, as well as the ability to automate action so staff are free to focus on strategic endeavors. A data center is like an IoT microcosm: every device and software package has a sensor or log, and is ripe for the application of artificial intelligence (AI), machine learning, and automation to enable people to focus on the business and not on infrastructure.
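A toy illustration of that "IoT microcosm" idea: apply a simple statistical check to device telemetry so anomalies surface automatically instead of waiting for a human to scan logs. Real AIOps tooling is far richer than a z-score test; the values and threshold here are invented for illustration.

```python
# Flag telemetry samples that deviate from the fleet norm by more than
# a chosen number of standard deviations (a basic z-score check).
import statistics

def anomalies(samples, z_threshold=2.0):
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return [x for x in samples if abs(x - mean) / stdev > z_threshold]

# Disk latency readings in milliseconds; one device is misbehaving.
latencies = [5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 90.0, 5.1]
print(anomalies(latencies))   # → [90.0]
```

Feed a signal like this into an automation layer and remediation (failover, ticketing, rebalancing) can begin before anyone reads a dashboard.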


Automation must be based on a shared, open API architecture that allows companies to simplify the transmission of data across a suite of management tools and third-party tools. Everything companies have must be API-based so that they can draw information in from other sources to create a more intelligent solution, and, when they are not the master in the environment, pass information out to make other systems smarter. There is a much broader opportunity to deliver better solutions if companies integrate with more vendors and partners.
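In practice, "everything API-based" usually means a thin adapter layer that maps each tool's vendor-specific payload into one shared schema that any other tool can consume. The vendor names and field names below are invented purely to illustrate the pattern.

```python
# Hypothetical adapter layer: two vendors report the same metric in
# different shapes; each adapter maps its payload into a common event
# format so downstream tools only need to understand one schema.
import json

def to_shared_schema(source, payload):
    """Map a vendor-specific payload into a common event format."""
    adapters = {
        "vendor_a": lambda p: {"host": p["hostname"], "metric": p["m"], "value": p["val"]},
        "vendor_b": lambda p: {"host": p["node"], "metric": p["metric_name"], "value": p["reading"]},
    }
    event = adapters[source](payload)
    event["source"] = source
    return json.dumps(event)

print(to_shared_schema("vendor_a", {"hostname": "db01", "m": "cpu", "val": 0.72}))
print(to_shared_schema("vendor_b", {"node": "db01", "metric_name": "cpu", "reading": 0.70}))
```

Adding a new vendor then means writing one more adapter, not reworking every consumer of the data.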

#5: Corporate Data Responsibility Becomes a Priority

The implementation of GDPR in 2018 has focused attention on data privacy and required companies to make major investments in compliance. All international companies that are GDPR compliant now have a data protection officer (DPO) in an enterprise security leadership role. Data protection officers are responsible for overseeing a data protection strategy and its implementation to ensure compliance with GDPR requirements.

The explosion of new technologies and business models is creating new challenges as companies shift from being data generating to data powered organizations. Big data systems and analytics are becoming a center of gravity as businesses realize the power of data to increase business growth and better understand their customers and markets. This has been fueled by advances in technologies to gather data, integrate data sources, and search and analyze data to derive business value. The most powerful companies in the world are those that understand how to use the power of data. Relative newcomers like Amazon, Baidu, Facebook, and Google have achieved their prominence through the power of data. However, with great power comes great responsibility.

IT must provide the tools and processes to understand their data and ensure that the use of that data is done responsibly.

Hubert Yoshida is responsible for defining the technical direction of Hitachi Data Systems. Currently, he leads the company's effort to help customers address data life cycle requirements and resolve compliance, governance, and operational risk issues. He was instrumental in evangelizing the unique Hitachi approach to storage virtualization, which leveraged existing storage services within the Hitachi Universal Storage Platform® and extended them to externally attached, heterogeneous storage systems.

Yoshida is well-known within the storage industry, and his blog has ranked among the "top 10 most influential" within the storage industry as evaluated by Network World. In October of 2006, Byte and Switch named him one of Storage Networking’s Heaviest Hitters and in 2013 he was named one of the "Ten Most Impactful Tech Leaders" by Information Week.

Prior to joining Hitachi Data Systems in 1997, Yoshida spent 25 years with IBM's storage division, where he held management roles in hardware performance, software development, and product management. Since joining Hitachi Data Systems, he has worked to develop open standards for storage management, and in his previous role as vice president of Data Networks, he helped to define the Hitachi Data Systems strategy for storage area networks, network attached storage, and other network-related storage and data technologies.

Yoshida is a graduate of the University of California at Berkeley, with a degree in Mathematics. He was a U.S. Marine Corps platoon commander during the Vietnam War and was discharged with the rank of Captain. Yoshida has authored several papers on storage area networks, Fibre Channel, multiprotocol SANs and storage virtualization technologies. He has served on the advisory boards of several technology companies and currently chairs the Scientific Advisory Board for the Data Storage Institute of the Government of Singapore and sits on the IEEE Technical Field Awards committee.
