Synopsys expert on proactive application security strategies for uncertain times

As cybercriminals take advantage of the fear and uncertainty surrounding the pandemic, it’s crucial that organisations ensure the software they build and operate is secure – despite reduced resources. Adam Brown, Associate Managing Security Consultant, Synopsys, talks us through the steps organisations can take to improve their application security programmes to protect organisational data and that of their customers.

In 2020, organisations have been faced with the prospect of months of staffing and business continuity challenges. Concurrently, cyberattacks by opportunistic hackers and cybercrime groups looking to profit or to further disrupt society are on the rise. Organisations must ensure the software they build and operate is secure against these increasing attacks, even as their available security resources may be decreasing.

And a remote workforce is only one of the challenges organisations face in securing their digital properties and sensitive data. While many companies want to invest in security, they may not know where to start. After all, identifying where and how to secure your most valuable or vulnerable projects is a challenging endeavour.

It’s a daunting task. However, by tactically addressing their security testing capacity, staff skills and software supply chain risks today, organisations can respond to immediate resource challenges while fundamentally improving the effectiveness of their AppSec programme going forward. Here’s how.

Establish a benchmark and mature your strategy

Get started by gathering a full understanding of what your organisation’s security activities involve. The Building Security In Maturity Model (BSIMM) is not a how-to guide, nor is it a one-size-fits-all prescription. A BSIMM assessment reflects the software security activities currently in place within your organisation, giving you an objective benchmark from which to begin building or maturing your software security strategy.

The BSIMM, now in its 11th iteration, is a measuring stick that can be used to inform a roadmap for organisations seeking to create or improve their software security initiatives (SSIs), not by prescribing a set way to do things but by showing what others are already doing.

Previous years’ reports have documented that organisations have been successfully replacing manual governance activities with automated solutions. One reason for this is the need for speed, otherwise known as feature velocity. Organisations are doing away with the high-friction security activities conducted by the software security group (SSG) out-of-band and at gates. In their place is software-defined lifecycle governance.

Another reason is a people shortage – the ‘skills gap’ has been a factor in the industry for years and continues to grow. Assigning repetitive analysis and procedural tasks to bots, sensors and other automated tools makes practical sense and is increasingly the way organisations are addressing both that shortage and time management problems.

But while the shift to automation has increased velocity and fluidity across verticals, BSIMM11 finds that it hasn’t put control of security standards and oversight out of the reach of humans.

Apply a well-rounded risk mitigation strategy

In fact, the roles of today’s security professionals and software developers have become multi-dimensional. With their increasing responsibilities, they must do more in less time while keeping applications secure. As development workflows continue to evolve to keep up with organisational agility goals, those workflows must account for a variety of requirements, including:

  • Real-time visibility into what software and services are running, as well as associated environments and configurations
  • Insight into running software’s composition
  • Automatic execution of at least the minimum required vulnerability discovery testing with each release, with results provided directly to bug tracking systems (see the sketch after this list)
  • Aggregation and search of operational data for meaningful security information across a value stream
  • Traceability of running services to the repositories, build and team that produced them
  • Enabling engineering teams to remediate security defects
  • Updating network, host, container or application-layer configuration through orchestration
  • Automatically invalidating and rotating sensitive assets within a deployment
  • Automatic fail-over/rollback to working assets or known-good working configuration/build
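
To make the vulnerability discovery item concrete, here is a minimal sketch of a release gate, assuming a hypothetical scanner CLI (‘scan-tool’) that emits JSON findings; the CLI name and its output shape are illustrative assumptions, not any specific vendor’s interface:

```python
# Minimal release-gate sketch: run a vulnerability scan, parse the findings,
# and fail the pipeline when anything at or above the policy threshold appears.
# "scan-tool" and its JSON output shape are hypothetical placeholders.
import json
import subprocess
import sys

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
FAIL_AT = "high"  # the gate's minimum blocking severity


def run_scan(target_dir: str) -> list:
    """Invoke the (assumed) scanner and return its list of JSON findings."""
    result = subprocess.run(
        ["scan-tool", "--format", "json", target_dir],
        capture_output=True,
        text=True,
    )
    return json.loads(result.stdout).get("findings", [])


def main() -> int:
    blocking = [
        f for f in run_scan(".")
        if SEVERITY_RANK.get(f.get("severity"), 0) >= SEVERITY_RANK[FAIL_AT]
    ]
    for f in blocking:
        print(f"BLOCKING {f['severity']}: {f['id']} in {f['component']}")
    return 1 if blocking else 0  # a nonzero exit fails the CI stage


if __name__ == "__main__":
    sys.exit(main())
```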

This is the reality around which organisations build and/or consume software. Over the years we’ve witnessed the expanding use of automation through the integration of tools such as GitLab for version control, Jenkins for continuous integration (CI), Jira for defect tracking and Docker for containers within toolchains. These tools work together to create a cohesive automated environment designed to let organisations focus on delivering higher-quality innovation to market faster.
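
The defect tracking leg of such a toolchain is typically a single REST call. Below is a hedged sketch of filing a security defect through Jira’s standard create-issue endpoint; the host, project key and credentials are placeholders a real pipeline would pull from its secret store:

```python
# Sketch: filing a security defect in Jira from a CI step via Jira's
# create-issue REST endpoint. Host, project key and credentials are
# placeholders, to be supplied from the pipeline's secret store.
import requests  # pip install requests

JIRA_BASE = "https://jira.example.com"       # placeholder host
AUTH = ("ci-bot", "api-token-from-secrets")  # placeholder credentials


def create_security_issue(summary: str, description: str) -> str:
    """POST to Jira's create-issue endpoint; returns the new issue key."""
    payload = {
        "fields": {
            "project": {"key": "SEC"},  # assumed project key
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
            "labels": ["security", "automated"],
        }
    }
    resp = requests.post(f"{JIRA_BASE}/rest/api/2/issue", json=payload,
                         auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["key"]


if __name__ == "__main__":
    key = create_security_issue(
        "Vulnerable dependency detected in build 1234",
        "openssl 1.0.1f matched CVE-2014-0160; upgrade required.",
    )
    print(f"Filed {key}")
```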

Through BSIMM iterations we’ve seen organisations realise there’s merit in applying and sharing the value of automation by incorporating security principles at appropriate touchpoints in the software development life cycle (SDLC), shifting the security effort ‘left’. This creates shorter feedback loops and decreases friction, allowing engineers to detect and fix security and compliance issues faster and more naturally as part of their development workflows.
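
As one small example of shifting left, a pre-commit hook can run a dependency audit at the developer’s desk, before risky code ever reaches the repository. This sketch uses pip-audit as an example auditor; any equivalent check could sit in its place:

```python
#!/usr/bin/env python3
# "Shift left" sketch: a git pre-commit hook (saved as an executable file at
# .git/hooks/pre-commit) that runs a quick dependency audit whenever a
# manifest file changes, so feedback arrives in seconds rather than days.
# pip-audit is used as an example auditor; it exits nonzero on known vulns.
import subprocess
import sys

MANIFESTS = ("requirements.txt", "Pipfile.lock", "pyproject.toml")


def staged_files() -> list:
    """Return the paths staged for this commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()


def main() -> int:
    if not any(path.endswith(MANIFESTS) for path in staged_files()):
        return 0  # nothing dependency-related staged; allow the commit
    print("Dependency manifest changed; running audit...")
    audit = subprocess.run(["pip-audit"])
    if audit.returncode != 0:
        print("Known-vulnerable dependencies staged; commit blocked.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```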

More recently, a ‘shift everywhere’ movement has been observed through the BSIMM as a graduation from ‘shift left’, meaning firms are not just testing early in development but conducting each security activity as early, and at the highest fidelity, as is practical. As development speeds and deployment frequencies intensify, security testing must complement these multifaceted, dynamic workflows. If organisations want to avoid compromising security or delaying time to market, directly integrating security testing is essential.

As the pace of innovation continues to accelerate, firms must not abdicate their security and risk mitigation responsibilities. Managed security testing delivers the key people, process and technology capabilities that help firms maintain the desired pace of innovation, securely.

In fact, the right managed security testing solution inverts the relationship between automation and humans: the humans powering the managed service act out-of-band to deliver high-quality input into an otherwise machine-driven process, rather than the legacy view in which automation merely augments or complements a human process.

It also affords organisations the application security testing flexibility they require while driving fiscal responsibility. Organisations gain access to some of the brightest minds in the cybersecurity field when they need them, without paying for them when they don’t; they simply draw on that expertise as needed to address current testing resource constraints. This results in unrivalled transparency, flexibility and quality at a predictable cost, and provides the data required to remediate risks efficiently and effectively.

Enact an open source management strategy

And we must not neglect the use of open source software (OSS) – a substantial building block of most, if not all, modern software. Its use continues to grow, and it provides would-be attackers with a relatively low-cost vector for launching attacks on the broad range of entities that make up the global technology supply chain.

Open source code provides the foundation of nearly every software application in use today across almost every industry. As a result, the need to identify, track and manage open source components and libraries has increased exponentially. Licence identification, processes to patch known vulnerabilities and policies to address outdated and unsupported open source packages are all necessary for responsible open source use. The use of open source isn’t the issue, especially since ‘reuse’ is a software engineering best practice; it’s the use of unpatched OSS that puts organisations at risk.

The 2020 Open Source Security and Risk Analysis (OSSRA) report contains some concerning statistics. Unfortunately, the time it takes organisations to mitigate known vulnerabilities remains unacceptably high. For example, 2020 – six years after its initial public disclosure – was the first year in which the Heartbleed vulnerability was not found in any of the audited commercial software that forms the basis of the OSSRA report.

Notably, 91% of the codebases examined contained components that were more than four years out of date or had seen no development activity in the last two years, exposing those components to a higher risk of vulnerabilities and exploits. Furthermore, the average age of the vulnerabilities found in the audited codebases was a little under 4½ years. 19% of the vulnerabilities were more than 10 years old, and the oldest was 22 years old. It is clear that we, as open source users, are doing a less than optimal job of defending ourselves against open source-enabled cyberattacks.

To put this in a bit more context: 99% of the codebases analysed for the report contained open source software; of those, 75% contained at least one vulnerability and 49% contained high-risk vulnerabilities.

If you’re going to mitigate security risk in your open source codebase, you first have to know what software you’re using and which known vulnerabilities and exploits could affect it. One increasingly popular way to get that visibility is to obtain a comprehensive bill of materials from your suppliers (sometimes referred to as a ‘build list’ or a ‘software bill of materials’, or SBOM). The SBOM should contain not only all the open source components but also the versions used, the download location for each project, and all dependencies: the libraries the code calls and the libraries to which those dependencies link.
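
As a minimal illustration of consuming such an SBOM, the sketch below walks the component list of a CycloneDX-style JSON document; real SBOMs carry more detail (licences, hashes, a dependency graph), and the file name is a placeholder:

```python
# Sketch: read a CycloneDX-style SBOM (JSON) and print the component
# inventory. Only the top-level component list is walked here; "sbom.json"
# is a placeholder path.
import json


def load_components(path: str) -> list:
    """Return the SBOM's component records, or an empty list if absent."""
    with open(path, encoding="utf-8") as fh:
        bom = json.load(fh)
    return bom.get("components", [])


if __name__ == "__main__":
    for comp in load_components("sbom.json"):
        name = comp.get("name", "?")
        version = comp.get("version", "?")
        purl = comp.get("purl", "")  # package URL, if the supplier included it
        print(f"{name}\t{version}\t{purl}")
```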

Modern applications consistently contain a wealth of open source components with possible security, licensing and code quality issues. At some point, as an open source component ages and decays (with newly discovered vulnerabilities accumulating in its code base), it is almost certainly going to break – or otherwise open a codebase to exploit. Without policies in place to address the risks that legacy open source can create, organisations expose cyber assets that are 100% dependent on software to avoidable issues.

Organisations need clearly communicated processes and policies for managing open source components and libraries; for evaluating and mitigating open source quality, security and licence risks; and for continuously monitoring for vulnerabilities, upgrades and the overall health of the open source codebase. Clear policies covering the introduction and documentation of new open source components help ensure control over what enters the codebase and that it complies with company policy. A simple automated gate over the component inventory can enforce the basics, as sketched below.
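
A hedged sketch of such a policy gate follows. The inventory record shape (last activity date, licence, known vulnerability count) is a hypothetical export from a software composition analysis tool, and the thresholds and policy entries are examples only:

```python
# Sketch of a simple open source policy gate over an inventory feed: flag
# components that appear unmaintained, carry a licence outside policy, or
# have unpatched known vulnerabilities. Record shape and thresholds are
# illustrative assumptions, not any specific tool's output.
from datetime import date, timedelta

MAX_STALENESS = timedelta(days=4 * 365)  # echoes the "four years out of date" risk above
DENIED_LICENSES = {"AGPL-3.0"}           # example policy entry


def violations(component: dict, today: date) -> list:
    """Return human-readable policy violations for one component record."""
    problems = []
    if today - component["last_activity"] > MAX_STALENESS:
        problems.append("no development activity within the policy window")
    if component["license"] in DENIED_LICENSES:
        problems.append(f"licence {component['license']} not permitted")
    if component["known_vulns"]:
        problems.append(f"{component['known_vulns']} unpatched known vulnerabilities")
    return problems


if __name__ == "__main__":
    inventory = [  # illustrative records, not real project data
        {"name": "libexample", "last_activity": date(2016, 5, 1),
         "license": "MIT", "known_vulns": 2},
    ]
    for comp in inventory:
        for problem in violations(comp, date.today()):
            print(f"{comp['name']}: {problem}")
```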

There’s no finish line when it comes to securing the software and applications that power your business. But it is critically important to manage and monitor your assets and to have a clear view into your software supply chain. No matter the size of your organisation, the industry in which you conduct business, the maturity of your security programme or the budget at hand, there are strategies you can enact today to progress your programme and protect your organisational data and that of your customers.
