With an increased reliance on cloud services, it’s all too apparent that dedicated management and protection measures are necessary.
This is especially true in an environment where a significant number of employees use non-corporate software and cloud services such as social networks, messengers or other applications.
With this in mind, we asked industry experts: How can IT leaders ensure better visibility over cloud access in the workplace?
Simon Howe, Vice President Sales APAC, LogRhythm
The challenge of gaining visibility over cloud access is deeply entwined with the challenge of ensuring IT security. Without clear oversight of all network traffic within an organization, neither is possible.
To achieve this aim, an organization needs to have in place tools that will monitor traffic and alert the security team to anomalous behavior. This could be anything from an external cyberattack to unauthorized access of a cloud resource by a staff member. Once clear visibility has been established, the team can respond in an appropriate way to each event as it occurs.
Visibility of cloud access is becoming increasingly important as the level of usage continues to grow. Indeed, more and more organizations are embracing a cloud-first strategy and steadily shifting away from on-premise IT infrastructure altogether.
For this reason, it’s highly likely that most organizations will already be making use of multiple cloud providers to fulfill different business needs. It might be Amazon Web Services for data storage, Salesforce for customer records, and Microsoft Office 365 for administrative support. All will generate traffic on the organization’s network that needs to be monitored and managed.
A good first step in improving visibility of cloud access is deploying a security information and event management (SIEM) platform. A SIEM can provide real-time analysis of security alerts and determine which require the attention of the security team. Authorized use of cloud resources can then readily be separated from unauthorized.
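To make the idea concrete, here is a minimal sketch of the kind of correlation rule a SIEM might apply to separate authorized cloud use from unauthorized. The event fields and the sanctioned-domain list are illustrative assumptions, not any particular product's schema.

```python
# Minimal sketch of a SIEM-style triage rule: flag network events whose
# destination is a cloud service outside the organization's sanctioned list.
# Field names and the sanctioned list are illustrative assumptions.

SANCTIONED_CLOUD = {"s3.amazonaws.com", "salesforce.com", "office365.com"}

def triage(events):
    """Return events that reference an unsanctioned cloud destination."""
    alerts = []
    for event in events:
        domain = event["destination"]
        # Match against sanctioned domains, including their subdomains.
        sanctioned = any(domain == d or domain.endswith("." + d)
                         for d in SANCTIONED_CLOUD)
        if not sanctioned:
            alerts.append(event)
    return alerts

events = [
    {"user": "alice", "destination": "salesforce.com"},
    {"user": "bob", "destination": "dropbox.com"},  # possible shadow IT
]
print(triage(events))  # only the dropbox.com event is raised
```

A real SIEM correlates far richer telemetry, but the pattern is the same: a baseline of sanctioned activity, and alerts for anything that falls outside it.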
Having this capability in place is particularly important when trying to manage the challenge of ‘shadow IT’. This occurs when staff bypass the IT department and make use of unauthorized cloud services. It might be storing corporate data in a personal account on Dropbox or diverting business email to a Google account. Staff may have no malicious intent, but the security challenges this type of cloud usage can create are significant.
Another factor that can make visibility of cloud usage challenging is the on-going prevalence of working from home. No longer tied to the corporate network, staff are instead using private Internet connections to access both centrally located applications and data stores as well as cloud resources. Having the tools in place that give visibility across this mixed environment is therefore vital.
The bottom line is that the providers of cloud platforms and services cannot be held responsible for inappropriate access by an organization’s staff. Effective steps, such as the deployment of security monitoring tools, need to be undertaken by the organization itself if effective visibility of cloud activity is to be achieved.
For organizations at a more advanced level of security maturity, machine learning-enabled tools that enhance deep network visibility, behavior analytics and threat detection make it possible to collect and analyze data more effectively and to accelerate the speed of threat detection and response.
Nathan Steiner, Head of Systems Engineering ANZ at Veeam Software
Back in yesteryear, we often referred to the concept of ‘shadow IT’, but what did it really mean? As organizations began to engage third-party managed services to deliver the ICT services they historically saw as ‘non-core’, the behavior we now call shadow IT started to form.
Businesses needed and wanted more visibility into what their IT services were and how they were running. Why? Because they quickly established that they were still very much tied to the risk and impact of these services, which remained key to the core business.
As we usher in the cloud and digital eras, organizations now operate their business-critical systems, applications and datasets across a disparate, distributed set of environments and platforms. They run business-critical data and platforms on-premises while consuming software-as-a-service (SaaS) platforms where required. Add to that the consumption of cloud analytics services, and of storage, network and compute within public cloud, and there lies hybrid/multi-cloud, bringing with it a whole new set of visibility challenges.
As organizations move to ‘services-oriented’, ‘on-demand’ consumption of IT services driven by and aligned to customer experience, they need complete visibility over those services. This is crucial not only for applying fiscal prudence to what is being consumed and for ensuring always-on, services-oriented availability of the platforms, but even more so for managing risk when it comes to cybersecurity exposure.
So, if there were two things leaders could do to ensure better visibility over cloud access as the starting point within their organizations, they would be:
- Harness APIs: There is no such thing as a ‘Single Pane of Glass’. Drive and lead with an ‘API’ driven approach that allows for the integration of your key and strategic ‘visibility’ platforms. You will have platforms managing your large-scale virtualization, platforms managing your cloud-native, DevOps driven applications, platforms providing vulnerability and security services, as well as identity, provisioning, orchestration and automation. The way to integrate for consolidated visibility is via ‘APIs’.
- Follow the framework: Hybrid/multi-cloud ecosystems are complex and multi-faceted. The goal is to ensure the consumer, citizen and customer experience is seamless, so look at visibility across that ecosystem in the same way. Break your hybrid/multi-cloud ecosystem down into discrete subsets, starting with a ‘services oriented’ view that is completely business aligned. What makes up these services in terms of the technology stack? They can only exist across the following: on-prem, managed, public and SaaS. Your storage, network, compute, apps and data will exist within all four. Apply visibility from an enterprise architecture perspective.
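The API-driven consolidation described in the first point can be sketched as a small aggregator that pulls findings from each platform's API and normalizes them into one feed. The endpoint URLs, response shapes and field names below are hypothetical placeholders, not any vendor's actual API.

```python
# Sketch of API-driven visibility consolidation: pull findings from several
# platform APIs and merge them into one normalized feed. URLs and response
# shapes are hypothetical placeholders.

import json
from urllib.request import urlopen  # any HTTP client would do

PLATFORM_APIS = {
    "virtualization": "https://vcenter.example.com/api/alerts",
    "cloud-native":   "https://k8s.example.com/api/findings",
    "security":       "https://scanner.example.com/api/vulns",
}

def normalise(platform, record):
    """Map each platform's record into a common schema."""
    return {
        "platform": platform,
        "severity": record.get("severity", "unknown"),
        "summary":  record.get("summary", ""),
    }

def consolidated_view(fetch=lambda url: json.load(urlopen(url))):
    """Merge findings from every platform into a single list."""
    merged = []
    for platform, url in PLATFORM_APIS.items():
        for record in fetch(url):
            merged.append(normalise(platform, record))
    # Sort for a stable, reviewable feed.
    return sorted(merged, key=lambda r: r["severity"])
```

The `fetch` parameter is injectable so each platform's client (and its authentication) can be swapped in without changing the consolidation logic; that separation is the point of leading with APIs rather than chasing a single pane of glass.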
The key takeaway here is that there must be a services-oriented approach to ‘visibility’. Why? Because the services will be underpinned by elements that exist across an entire hybrid/multi-cloud ecosystem. The ‘application to bare metal’ approach of visibility will exist across all four.
It’s the consolidation of visibility by service that will ensure you have the required visibility coverage to manage risk and impact and ensure continuity of customer, citizen and consumer experience.
Dale Heath, Sales Engineering Manager at Rubrik A/NZ
When it comes to securing your business, access controls are one of the most fundamental principles. Role-based access controls, for example, assign users access based on the principle of ‘least privilege’: each user receives precisely the privileges required to perform their job, but no more.
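A least-privilege, role-based check can be illustrated in a few lines. The role and permission names here are invented for the example; the important property is that access fails closed for anything not explicitly granted.

```python
# Minimal sketch of role-based access control under least privilege:
# each role carries only the permissions its job requires, and the check
# denies anything not explicitly granted. Names are illustrative.

ROLE_PERMISSIONS = {
    "analyst":  {"report:read"},
    "engineer": {"report:read", "config:write"},
    "auditor":  {"report:read", "audit:read"},
}

def is_allowed(role, permission):
    """Grant access only if the role explicitly holds the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "report:read"))   # True
print(is_allowed("analyst", "config:write"))  # False: not in the role
```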
In order to implement an effective access control strategy, IT leaders need insight into what their users are accessing so any violations can be identified. Ensuring better visibility over cloud access across the enterprise, however, requires decision makers to rethink what it is they’re seeking visibility into.
Traditionally, the prevailing wisdom has been to seek more visibility into user behavior and application access.
However, given increasing regulation and public scrutiny over data privacy, as well as the massive proliferation of cloud applications and infrastructure, IT leaders need visibility at the data level, rather than just the application and user level.
For example, with visibility into the application level, you might be able to see that user x has accessed applications y and z. What this won’t show you is the data they have accessed within each application. Critically, with so many new cloud applications and users working from remote locations, there’s the very real risk that sensitive data has been inadvertently stored in places it shouldn’t. Without visibility at the data-level, IT leaders are operating with a huge blind spot.
The other benefit to visibility across the enterprise at the data level is that Machine Learning and policy-based algorithms can automate the scanning, discovery and classification of sensitive data (such as credit card information) in order to better understand overall risk posture, ensure regulatory compliance (such as PCI-DSS compliance) and remediate violations as soon as they occur.
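The policy-based classification described above can be sketched for a single data type: a scanner that finds candidate credit card numbers with a pattern match, then validates them with the Luhn checksum to cut false positives. Production discovery tools cover many more data types and formats; this shows the classification pattern only.

```python
# Sketch of policy-based sensitive-data discovery: match candidate
# credit card numbers with a regex, then validate each with the Luhn
# checksum so random digit runs are not flagged.

import re

# 13-19 digits, optionally separated by spaces or hyphens.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(number):
    """Standard Luhn check: double every second digit from the right."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text):
    """Return substrings that look like valid card numbers."""
    return [m.group().strip() for m in CARD_PATTERN.finditer(text)
            if luhn_valid(m.group())]

sample = "Order notes: card 4111 1111 1111 1111, ref 1234 5678."
print(find_card_numbers(sample))  # only the Luhn-valid card is flagged
```

The same scan-then-validate pattern extends to other regulated data types; in practice the discovery engine runs these policies continuously over stored data rather than over a single string.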
Further, with this granular insight across the enterprise, role-based access controls can then be defined at the data level, meaning that even if sensitive or confidential data is stored incorrectly, only users with the necessary permissions will be able to access it (even when it is located in an application they have access to).