Data residency: Creating a frictionless experience

Avoiding unnecessary friction when it comes to data residency is a high priority for business leaders. Darragh Curran, CTO, Intercom, offers some top tips on the best approaches to turn what might seem like a complex and risky infrastructure project into something much more manageable.

For large enterprise businesses with global size and scale, privacy issues and data security are constantly top of mind. Even with a strong focus on building customer trust and a goldmine of industry certifications, considerations about data encryption, authentication and permissions tend to keep the big players up at night.

A related trending topic in this space is data hosting – it’s a huge consideration for many vendors that serve larger customers. It’s natural for businesses to question where their data is being stored and to want to feel comfortable with data hosting and handling policies and controls. Whether the requirement comes from industry norms, local regulatory guidelines or simply internal company policies, the question of where customers’ data is stored should never become a point of friction for the customer.

But the problem with localised data hosting is building out the needed infrastructure. A single localised hosting location isn’t enough – to satisfy large, global enterprises, you need to create the systems and frameworks that can support multiple locations. This can be a friction-filled process.

It’s simple to host data in one place. Most companies start with the assumption that hosting from a single location will be their standard, and that assumption becomes deeply ingrained in their architecture. Unwinding it is complex and time-consuming, and it hinders your company’s ability to move fast. Deployment also becomes harder and engineers tend to feel the brunt of the pain.

There are a few pitfalls to watch out for here. When faced with a project that impacts their core infrastructure, companies may be too quick to abandon the day-to-day principles and approaches that have guided their engineers to success in the past. They may also incorrectly see the work as an opportunity to introduce new technologies, which are actually not required for solving the original customer problem. Finally, a failure to embrace automation at every step of the way introduces risk and adds to implementation time.

Here is what I advise based on my experience leading engineering at Intercom over the last 10 years:

Adopt a principled approach

Principles are a way of encoding successes, helping to repeat the behaviours that led to positive outcomes and avoid previous behaviours that led to mistakes. It can be easy to think such principles are only for software engineers building customer-facing products, and can’t be applied to a complex infrastructure project. On the contrary, it’s during the most complex and longest running projects that the value of a principle-based approach can shine. You will likely know what you need to build, but you should also consider the how. What are your principles when it comes to how you manage risk, for example, or how will you approach technical trade-offs and make key decisions?

Know the ways of working that your engineers are used to and what has brought success in the past. Now is not the time to abandon them, but to prove them out, to sharpen your ways of thinking and working. The purpose of engineering principles and process is to help teams build high-quality solutions as efficiently as possible.

Take for example our principle of ‘Think big, start small, learn fast’. We were determined to move quickly by taking small steps with no time wasted upfront creating unwieldy Gantt charts, or debating effort estimates. We bias for action. Following this principle helped us complete bite-sized chunks of work that quickly uncovered areas of previously unknown risk, embracing each small ‘failure’ along the way as an opportunity to learn and adapt our methods.

Stick to the technologies you know well

Fundamental changes to your technical data architecture are fraught with risk. A new large-scale project is not the time to switch to a new stack or to change build patterns that have served you well in the past. Not only can this introduce new implementation risk, but once live, you will end up operating a technology or platform that you have limited skills and experience to support. Go for stability: think of the work as an evolution of the architecture you have today, rather than something more revolutionary. Architecture choices can have a fundamental impact on your velocity, and companies with large and disparate technology stacks may struggle to make fast progress. Being highly opinionated about the type and size of tech stack you choose to run leads to simplified implementations across all your projects.

It’s beneficial to be technically conservative, sticking to familiar solutions like Amazon Web Services where you can build up deep expertise in its services and technologies over time. This allows you to keep things simple and avoid introducing additional complexity where it’s not needed. Instead of harnessing a new technology, or trying to build complex cross-region data infrastructure, think about how to build a near-exact replica of an existing region’s production environment in your new region.
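To illustrate that ‘replica, not redesign’ mindset, here is a minimal Python sketch. Every name and value in it is hypothetical, but it captures the idea: the new region is defined as a copy of a battle-tested existing region, with only location-specific settings changed.

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class RegionEnvironment:
        """Hypothetical description of one region's production environment."""
        aws_region: str
        db_cluster_size: int
        cache_nodes: int
        kms_key_alias: str

    # The existing region's configuration is the template.
    US_PROD = RegionEnvironment(
        aws_region="us-east-1",
        db_cluster_size=6,
        cache_nodes=4,
        kms_key_alias="alias/prod-customer-data",
    )

    # The new region is a near-exact replica: same shape, same controls,
    # only the location changes. No bespoke cross-region design required.
    EU_PROD = replace(US_PROD, aws_region="eu-west-1")

Keeping the delta between regions that small is what makes the next step – automated provisioning – tractable.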

Embrace automation

Hopefully you’ll have made investments in automation in the past, and this is where they will pay off. Infrastructure as Code platforms allow you to ‘blueprint’ infrastructure configuration within your network and across server fleets and database clusters, which can then be used to automatically provision new infrastructure in a new location. This approach ensures you are not starting from scratch when it comes to configuration, accelerating you through the build phase, while also ensuring that your existing security controls are applied consistently in the new jurisdiction. This is essential: speed is impressive, but security can’t be compromised.
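As a concrete sketch of what ‘blueprinting’ can look like, here is a minimal example using AWS CDK in Python – one of several Infrastructure as Code options, and the stack and resource names here are illustrative rather than anyone’s actual configuration. One reviewed definition, security controls included, is stamped out identically per region:

    import aws_cdk as cdk
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class RegionStack(cdk.Stack):
        """One region's production environment, described once as code."""

        def __init__(self, scope: Construct, stack_id: str, **kwargs) -> None:
            super().__init__(scope, stack_id, **kwargs)
            # Security controls live in the blueprint itself, so every new
            # region inherits them automatically instead of relying on
            # someone to configure them by hand.
            s3.Bucket(
                self,
                "CustomerData",
                encryption=s3.BucketEncryption.S3_MANAGED,
                block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
                versioned=True,
            )

    app = cdk.App()
    # Standing up a new hosting location becomes one more entry in this
    # list, not a from-scratch build.
    for region in ["us-east-1", "eu-west-1", "ap-southeast-2"]:
        RegionStack(app, f"Prod-{region}", env=cdk.Environment(region=region))
    app.synth()

Because the blueprint is ordinary code, adding a jurisdiction is a one-line change that flows through the same review and deployment process as any other change.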

There are governance benefits here too. All infrastructure changes are documented as peer-reviewed pull requests, with the resulting state captured in auditable files.

Our own commitment to automation forms part of Intercom’s ‘Zero Touch ops’ philosophy. It’s an approach that values systemic investment in infrastructure automation, with no tolerance for manual systems management or configuration. It’s part of what allows us to run a lean infrastructure organisation with outsized capabilities and the ability to move fast and adapt. The impact of this approach is visible elsewhere in the critical systems our infrastructure teams run: our continuous delivery pipeline takes code from a developer’s laptop, runs thousands of tests and deploys it live to our production environment, and into the hands of our customers, within 12 minutes.

Companies that adopt these approaches will find that a seemingly complex and risky infrastructure project can start to appear much more manageable.
