Dealing with the data challenges of cloud-based infrastructure

Rafi Katanasho, APAC Chief Technology Officer and Solution Sales Vice President at Dynatrace, tells us how organizations can deal with the data challenges of cloud-based infrastructure to ensure maximum efficiency. “To overcome this challenge, organizations are increasingly making use of what is termed Artificial Intelligence for Operations (AIOps) platforms,” he says.

Throughout the past decade, cloud computing has revolutionized the way in which IT infrastructures are designed and deployed. Reliance on in-house data centers has declined as organizations embrace an ‘X-as-a-Service’ approach to technology.

This shift has done much to boost agility and support strategies of Digital Transformation. Cloud platforms allow organizations to be much more responsive to changes in market demand and to take advantage of new opportunities as they emerge.

In many instances, IT infrastructures are now built using multiple cloud services. This allows IT teams to select the most appropriate platform for particular applications and tie everything together as a cohesive whole.

However, while these multi-cloud environments deliver significant advantages, they also create new challenges due to their complexity and scale. Applications that span multiple platforms can often comprise millions of lines of code and generate complex interdependencies. It has reached the stage where it is now beyond human capacity to manually monitor these environments and ensure they function at maximum efficiency.

The role of AIOps

To overcome this challenge, organizations are increasingly making use of what is termed Artificial Intelligence for Operations (AIOps) platforms. AIOps combines Big Data and Machine Learning tools to automate IT operations and allow developers to focus on more strategic work.
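To make the idea concrete, the sketch below shows the kind of statistical baselining an AIOps platform might apply to a stream of operational metrics: learn a rolling baseline and flag points that deviate sharply from it. The function, thresholds and data here are illustrative assumptions, not Dynatrace's actual algorithm.

```python
# Minimal sketch of metric anomaly detection via a rolling baseline.
# All names and thresholds are illustrative, not a vendor's method.
from statistics import mean, stdev

def find_anomalies(series, window=20, threshold=3.0):
    """Flag indices deviating more than `threshold` standard
    deviations from a rolling baseline of the last `window` points."""
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Steady response times (ms) with one injected spike at index 25
latencies = [100 + (i % 3) for i in range(40)]
latencies[25] = 900
print(find_anomalies(latencies))  # -> [25]
```

Real AIOps platforms go well beyond this, correlating anomalies across services and inferring probable root causes, but the principle is the same: the machine watches baselines at a scale no human team could.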

However, while AIOps is powerful, it's only as smart as the quality and quantity of the data that teams feed into it, which is why observability is also critical. IT teams need to capture detailed metrics and logs from multi-cloud applications and infrastructure and feed them into AIOps platforms.

This is what enables AI to give DevOps teams the insights they require to optimize applications and deliver better customer experiences.
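What "detailed, contextual" data means in practice is easiest to see in a log record. The sketch below (a generic illustration, not any specific vendor's API) emits one structured JSON record per event, carrying the fields, such as service name and trace ID, that let an AIOps platform correlate events across services:

```python
# Illustrative structured logging: one JSON object per event, with
# contextual fields an AIOps platform can index and correlate on.
import json
import time
import uuid

def log_event(service, level, message, trace_id=None, **attrs):
    record = {
        "timestamp": time.time(),
        "service": service,          # which microservice emitted this
        "level": level,
        "message": message,
        # A shared trace ID ties related events across services together
        "trace_id": trace_id or uuid.uuid4().hex,
        **attrs,                     # any extra dimensions, e.g. latency
    }
    return json.dumps(record)

line = log_event("checkout", "ERROR", "payment timeout",
                 trace_id="abc123", latency_ms=5042)
print(line)
```

Unstructured text logs force the platform to guess at context; structured records like this make the interdependencies between services directly queryable.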

Taming the data deluge

A challenge arises, however: organizations are becoming overwhelmed by the volume of data generated by the thousands of microservices and containers that comprise a multi-cloud environment.

IT teams are finding it difficult to keep up using traditional log monitoring and analytics solutions. As a result, it's becoming more difficult for organizations to ingest, store, index and analyze observability data at the required scale.

Additional challenges stem from the data silos created as organizations have adopted multiple monitoring and analytics solutions for different purposes. This fragmented approach makes it difficult to analyze log data in context, which limits the value of the AIOps insights an organization can unlock.

At the same time, organizations are often forced to move historical log data into 'cold storage' or delete it entirely to reduce costs. While this makes log analytics more cost-effective, it also reduces the value that historical data can bring to modern AIOps-driven approaches.

If log data is shifted into cold storage, organizations are unable to use AIOps platforms to query it on demand for real-time insights. The data needs to be rehydrated and reindexed before teams can run queries and gain insights, which can take hours or even days.

This delay may create significant problems as insights become outdated and deliver less value to the business.

Boosting observability in a multi-cloud environment

Enthusiasm for using multi-cloud environments shows no signs of slowing as the business world’s appetite for digital services continues to grow. As a result, organizations must find new approaches to capturing, ingesting, indexing and operationalizing observability data.

This, in turn, is creating the need for log analytics models designed to keep pace with the complexity of multi-cloud environments and scale limitlessly with the huge volumes of metrics, logs and traces they create. Data 'lakehouses' are a powerful solution, as they combine the structure, management and querying features of a data warehouse with the low-cost benefits of a data lake.
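The payoff of the lakehouse approach is that logs and metrics land in one queryable store, so a single query can analyze them in context without moving data between systems. The toy example below uses an in-memory SQLite database purely as a stand-in for that idea (real lakehouses sit on open table formats over object storage); the tables and values are invented for illustration:

```python
# Toy stand-in for the lakehouse idea: logs and metrics in one
# queryable store, analyzed together in a single SQL query.
# (sqlite3 is illustrative only; the schema and data are invented.)
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE logs (ts INT, service TEXT, level TEXT, msg TEXT)")
db.execute("CREATE TABLE metrics (ts INT, service TEXT, cpu REAL)")
db.executemany("INSERT INTO logs VALUES (?, ?, ?, ?)", [
    (100, "checkout", "ERROR", "payment timeout"),
    (101, "search", "INFO", "query ok"),
])
db.executemany("INSERT INTO metrics VALUES (?, ?, ?)", [
    (100, "checkout", 0.97),
    (101, "search", 0.20),
])

# One query correlates errors with CPU saturation on the same
# service at the same time -- no rehydration, no silo-hopping.
rows = db.execute("""
    SELECT l.service, l.msg, m.cpu
    FROM logs l JOIN metrics m
      ON l.service = m.service AND l.ts = m.ts
    WHERE l.level = 'ERROR' AND m.cpu > 0.9
""").fetchall()
print(rows)  # -> [('checkout', 'payment timeout', 0.97)]
```

When logs, metrics and traces are fragmented across separate tools, the equivalent of this join has to be done manually by an engineer; a unified store lets the AIOps platform run it continuously.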

Taking this approach eliminates the need for teams to manage multiple sources of data, piece them together manually and move them between hot and cold storage. It also increases the speed and accuracy of AIOps insights.

Embracing such a strategy can enable a business to unlock data and log analytics in full context and at enormous scale, enabling faster querying that delivers better answers from AIOps.

Businesses that are able to achieve this will be well placed to improve their ability to deliver superior experiences for their customers and achieve a valuable edge over competitors.
