The new data decade’s other disruption

Chris Greenwood, Head of UK&I at NetApp, offers insight into how new technologies might disrupt what data looks like from an organisation’s perspective – and how this knowledge could help CIOs begin preparing for the future.

As we move into a new decade, with cloud computing shifting from new frontier to standard operating practice and Digital Transformation holding fast as the top priority for many businesses, CIOs are looking towards the next wave of disruption and asking where the technological opportunities and risks of the coming years will lie.

The changing data landscape

There is no shortage of candidates. The UK is already leading the way on 5G rollout, keen to take advantage of the higher bandwidths and lower latencies the technology promises and the new types of networking they enable. According to a recent Gartner report, 7% of communications service providers worldwide have already deployed 5G infrastructure in their networks, and commercial applications will only gather pace when the next release of the 5G standard arrives in June 2020, bringing with it further detail on Ultra-Reliable Low Latency Communication (URLLC), vehicle-to-everything (V2X) communication and Industrial Internet of Things (IIoT) applications.

The IoT itself represents a significant area of potential disruption which, fuelled partly by increasing 5G availability, will draw vastly more endpoints into the enterprise network. The GSMA has estimated that the number of IoT connections will reach 25 billion by 2025, pushing global IoT revenue to over US$1 trillion, as everything from security systems to solar panels to supermarket shelves begins reporting data into the network.

In order to make sense of all this data, organisations will increasingly turn to various forms of Artificial Intelligence (AI) to gather it, analyse it and apply it back to business processes with a degree of accuracy and efficiency unattainable by human operators. This change, too, is already gathering enormous pace, with research firm IDC predicting that 75% of enterprises will be employing some degree of intelligent automation by 2022.

Those AI tools demand enormous resources in terms of data storage and processing, and while some of that will happen in the public cloud, AI will also drive growth in the amount of computation done on premises at branch sites and within IoT devices – with Forrester expecting the Edge cloud service market to grow by over 50% in 2020 alone. This Edge Computing capacity will be vital for applications where low latency is at a premium, such as autonomous vehicles and financial trading, or where the data’s sensitivity makes transmitting it to the cloud riskier, such as medical applications.

Mastering the data

This shift is widely discussed and unlikely to be news for IT professionals. Each of these technological trends has significant disruptive potential, on the order of the changes which cloud computing has enabled across industry sectors over the last decade. These environments will make possible new use cases that rely on intelligent, instantaneous and autonomous decision-making – and, as it was with cloud computing, it is difficult now to predict how the disruption will play out and what the major benefits and pitfalls will be.

One thing that is clear, however, is that they will stimulate a diversification in where and how data is held and processed. As a high-stakes environment with deep potential to adopt the kinds of data-driven methods which these technologies enable, healthcare is an ideal example. A report on AI in healthcare from NHSX cites the use of natural language processing on patients’ medical records as an exemplar of effective AI usage, both saving significant amounts of money through efficiency gains and increasing the accuracy of data recording and retrieval – and therefore helping to improve patient outcomes. As systems like these become widespread, it’s easy to imagine them taking full advantage of the new data reality, with medical devices drawing patient data through 5G connections and using AI-driven analytics to process it alongside locally collected data to make informed treatment decisions.
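To make that natural language processing step a little more concrete, here is a minimal sketch of extracting structured, searchable information from a free-text clinical note. It is an illustration only: a general-purpose spaCy model stands in for a clinically trained one, and the note text and output fields are invented for the example.

```python
# Minimal sketch: extracting structured, searchable fields from a free-text clinical note.
# A general-purpose spaCy model stands in for a clinically trained one; the note text
# and output fields are invented for illustration.
import spacy

# Hypothetical stand-in model; a production system would use a model trained on clinical text.
nlp = spacy.load("en_core_web_sm")

note = "Patient reviewed at St Thomas' Hospital on 12 March. Continue ramipril 5mg daily."

doc = nlp(note)
record = {
    # Named entities the model recognises (dates, organisations, quantities, etc.)
    "entities": [(ent.text, ent.label_) for ent in doc.ents],
    # Lemmatised content words, usable as index terms for later retrieval
    "index_terms": [t.lemma_ for t in doc if not t.is_stop and not t.is_punct],
}
print(record)
```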

There is, however, a tension in this promise: in order to be as useful as possible, massive amounts of data held in different systems need to be made mutually accessible, but at the same time patient information is highly sensitive, requiring strict oversight to avoid misuse and maintain trust. In fact, organisations in all sectors pursuing data-led strategies and increasing the amount of computation performed at the edge will encounter challenges like this.

The response to this challenge is to think in terms of architecting a data fabric. Whereas enterprises could once build IT systems on the assumption that data would pass to and from a centralised mainframe, and design processes and policy around that fact, there is now little scope to know in advance where and how data will be stored and used. A data fabric sits across all of an organisation’s data and provides a unified view of it while preserving management policies, bringing consistency to data management and allowing diverse datasets to be used together without the expense and risk of building bespoke connections between each application and each source.
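As a rough illustration of that pattern, the sketch below shows a thin access layer presenting a single read interface over several storage backends while applying one shared access policy. The class names, datasets and policy rules are all hypothetical and intended only to convey the ‘unified view, consistent policy’ idea, not to represent any particular product’s API.

```python
# Rough sketch of the data fabric idea: one access layer spanning several data locations,
# with a single policy applied consistently. All class names, datasets and policy rules
# here are hypothetical illustrations, not any vendor's actual API.
from typing import Dict, Protocol, Set


class DataStore(Protocol):
    def read(self, dataset: str) -> bytes: ...


class OnPremStore:
    def read(self, dataset: str) -> bytes:
        return b"rows-from-on-prem-array"      # placeholder for a real storage call


class CloudObjectStore:
    def read(self, dataset: str) -> bytes:
        return b"rows-from-cloud-bucket"       # placeholder for a real storage call


class DataFabric:
    """Unified view over heterogeneous stores, enforcing one set of access policies."""

    def __init__(self, stores: Dict[str, DataStore], policy: Dict[str, Set[str]]):
        self.stores = stores    # dataset name -> the store that holds it
        self.policy = policy    # dataset name -> roles allowed to read it

    def read(self, dataset: str, role: str) -> bytes:
        # The same policy check applies no matter where the data physically lives.
        if role not in self.policy.get(dataset, set()):
            raise PermissionError(f"{role} may not read {dataset}")
        return self.stores[dataset].read(dataset)


fabric = DataFabric(
    stores={"patient_notes": OnPremStore(), "device_telemetry": CloudObjectStore()},
    policy={"patient_notes": {"clinician"}, "device_telemetry": {"clinician", "analyst"}},
)
print(fabric.read("device_telemetry", role="analyst"))   # allowed by policy
# fabric.read("patient_notes", role="analyst")           # would raise PermissionError
```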

We may not yet know precisely where, how and when 5G, IoT, AI and Edge Computing will disrupt different industries. We do know, however, how they will disrupt what data looks like from an organisation’s perspective – and this knowledge can help CIOs begin preparing for the future today.
