
Empowering developers with Containerised Development Workflows
Posted by Blair Jacobs . May 12.25
When HYPR works with a customer on their software systems, our mission is always to deliver the most value possible. In most organisations, we observe a range of challenges that developers face that make it difficult to provide value at pace.
To set ourselves and the teams we collaborate with up for success, we create an environment where we can iterate quickly and receive fast feedback from tests, allowing us to move with confidence. We call this a ‘Containerised Development Workflow’ because it uses Docker containers to let us develop in isolation on our local machines while simulating production-like environments.
This removes bottlenecks such as waiting for test resources to be provisioned and for access to be granted; typical organisational IT and network policies need not apply because these resources run locally in an isolated, sandboxed environment. The environment is extremely flexible and, as a form of infrastructure as code, reliably repeatable.
In this post, we describe the motivations, approaches and benefits we observed from adopting containerised development workflows, in the hope that others may adopt and benefit from applying this strategy.
Motivation
For some time, we have focused on path to production and left-shifting quality practices so that they happen early and often.
Containerised workflows facilitate a left shift by creating environments where developers can do more on their local machines, breaking down impediments to progress and freeing developers to move rapidly and innovate.
This approach fosters:
- Happy, productive developers
- High-performing teams
- A culture of innovation and success
For the business, this translates to:
- Increased staff retention
- Preservation of institutional knowledge
- Accelerated innovation and output
- Attraction of top-tier talent
All these factors contribute to a high-performance organisation.
Empowering developers to do more
We strive to enable developers by:
- Minimising handoffs between teams
- Providing powerful tools that promote T-shaped skillsets
- Creating fast, frictionless feedback loops
The goal is a seamless development experience where creativity and innovation thrive. Too often, developers are bogged down by organisational policies that hinder productivity, cause frustration, and increase turnover.
Developers do what they do because it’s fun and creative, so let’s keep it that way.
Equipping developers with robust local environments allows them to experiment safely and get immediate feedback without unnecessary blockers or toil.
Local development environment as code with Docker Compose
Setting up any development environment can be tedious and error-prone. A developer’s computer is a multi-purpose tool running many applications and governed by organisational policies. This makes it very difficult to create a consistent environment for running and testing code, especially when tests require specific software installations and network configurations.
With this setup, developers can usually compile and unit test their code reliably, though even this can be challenging. As we move up the testing pyramid to run integration tests that interact with other applications and networks, testing becomes increasingly difficult and non-repeatable. Often, we see patterns like a shared database or OAuth service being used for development. This is not ideal because:
- Services are subject to company security policies and practices
- Developers are not free to experiment without disrupting others
- State persists between runs, which affects behaviour and means tests are not reliably repeatable
Overall, these factors add complexity and raise the barrier to testing.
To overcome these issues, we began experimenting with Docker as a development environment and applying the concept of infrastructure as code (IaC) to our local development environments.
Our first iteration of a containerised workflow was a single container that offered a consistent environment for compiling code and running unit tests, independent of the host operating system and any policy restrictions that the organisation may apply to it.
Next, we began introducing database containers so that we could integrate with various databases. Over time, we developed more complicated environments, which included tools like OAuth identity providers, Nginx proxies, the OPA policy engine and Kafka. Later, we will discuss how we introduced AWS resources into these environments.
Typically, as you move up the testing pyramid, tests become more challenging to set up, brittle and prone to failure because they interact with more external dependencies that must be configured for the test. We found, however, that we were able to configure these environments reliably through the use of Docker Compose. This is because Docker runs in an isolated, sandboxed environment and starts with exactly the same state each time, making it reliably repeatable and greatly reducing the complexity of any test setup.
A Containerised Development Environment has the following positive characteristics:
- Ephemeral: Can be created and destroyed as needed
- Stateless and repeatable: When restarted, all test state is lost and the environment reverts to exactly the same initial state
- Private: Resources are not shared; they are created on the developer machine in a sandboxed environment with its own internal network and are not accessible externally by default
- Configurable: An environment can be defined via a simple text file with a rich set of containers available (see Docker Hub).
- Same command set: The same commands are used to start any environment as environment definitions are encapsulated in the configuration
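As an illustration, such an environment can be captured in a single Compose file. The sketch below is not our actual configuration; the service names, images and ports are examples of the kinds of dependencies we run locally.

```yaml
# docker-compose.yml — illustrative sketch of a local integration environment.
# Service names, images and ports are examples, not a real project's config.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: local-dev-only   # sandboxed, never leaves the machine
    ports:
      - "5432:5432"
  kafka:
    image: apache/kafka:3.8.0
    ports:
      - "9092:9092"
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"   # LocalStack's single edge port for all AWS services
```

Because the entire environment is declared in one text file, it is versioned alongside the code and starts identically on every machine.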
With this tooling in place, we could run our integration tests just as easily and reliably as unit tests. This resulted in a massive increase in accuracy, to the point where we expect our code to work without issue the first time it’s deployed!
Having such a powerful local environment is also a great learning tool. Having your own private ephemeral environment, which can instantly be reset if broken, means you can experiment as much as you like. We can experiment with catastrophic failure modes, refine logging and capture these experiments as tests.
These environments are easily reproducible and can be run in a Continuous Integration (CI) pipeline.
Standardised micro format
We adopted a simple set of well-known commands for starting any environment and running its tests. This is possible because the complexity of setting up the environment is encapsulated in the configuration. The goal is for anyone with the source code to get up and running, with tests passing, effortlessly. We know this process will work reliably because the CI pipeline continually runs the exact same process. This breaks down barriers to onboarding and reduces toil and frustration.
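One way to express such a command set is a small Makefile that every repository shares. The target names and the `app` service below are illustrative; the point is that the same verbs work everywhere because all environment-specific detail lives in the Compose configuration.

```makefile
# Illustrative standardised command set. The same targets exist in every
# repository; only docker-compose.yml differs between projects.
.PHONY: env-up env-down test

env-up:              ## start the local containerised environment
	docker compose up -d --wait

env-down:            ## stop the environment and discard all state
	docker compose down -v

test: env-up         ## run the test suite against the local environment
	docker compose run --rm app ./run-tests.sh
```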
Test Driven Development
This tooling encourages a test-driven approach because the barriers to testing are broken down. As a result, the easiest way to experiment with third-party systems in a containerised environment is by writing tests that interact with them. This fosters experimentation through tests, which are committed and run in the CI pipeline, making them valuable. Contrast this with ad hoc testing, where too often test results are never accurately recorded or, at best, are buried in documentation.
Continuous Integration (CI)
When containerised environments are used for local development, these exact same containers are run in the CI pipeline to build and test code. Just as the CI pipeline asserts that our code will work in production, it now also asserts that our local development environment will work. This gives us the confidence that our local development environment will start seamlessly and repeatably for anyone with the code, as this is continually being asserted by the CI pipeline. We all know the saying “practice makes perfect”.
Cloud software, tested locally
In a cloud-first world, local-first development is our edge. Using LocalStack, an AWS service emulator, we replicate real AWS infrastructure locally in a containerised environment.
Integration testing with LocalStack
We test AWS integrations safely and accurately, just as we do other external software, by writing tests that:
- Set up AWS resources
- Interact with those resources locally
- Validate AWS state changes post-test
- Tear down AWS resources
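The steps above can be sketched as a test. This assumes LocalStack is already running on its default edge port (4566) and that `boto3` is installed; the bucket and key names are illustrative.

```python
# Sketch of an integration test against LocalStack, following the four steps
# above. Assumes LocalStack is listening on localhost:4566; names are
# illustrative, not from a real project.
import boto3

ENDPOINT = "http://localhost:4566"  # LocalStack's default edge port

s3 = boto3.client(
    "s3",
    endpoint_url=ENDPOINT,
    region_name="us-east-1",
    aws_access_key_id="test",       # LocalStack accepts dummy credentials
    aws_secret_access_key="test",
)

def test_upload_round_trip():
    # 1. Set up AWS resources
    s3.create_bucket(Bucket="example-bucket")
    try:
        # 2. Interact with those resources locally
        s3.put_object(Bucket="example-bucket", Key="greeting.txt", Body=b"hello")
        # 3. Validate AWS state changes post-test
        body = s3.get_object(Bucket="example-bucket", Key="greeting.txt")["Body"].read()
        assert body == b"hello"
    finally:
        # 4. Tear down AWS resources, leaving LocalStack in its initial state
        s3.delete_object(Bucket="example-bucket", Key="greeting.txt")
        s3.delete_bucket(Bucket="example-bucket")
```

Because the environment is ephemeral, a failed teardown is never fatal: restarting the LocalStack container restores a clean slate.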
These fast, isolated test cycles help developers:
- Deepen their understanding of AWS
- Safely experiment with failure modes
- Build confidence in the cloud ecosystem
Deploying and validating AWS infrastructure locally
Our local workflows have evolved beyond single containers to full-stack, cloud-simulated environments. By integrating Terraform into our local toolset, we now:
- Include infrastructure as code in the same repo as application code
- Run Terraform locally to validate changes
- Mirror production deployments in a safe, sandboxed, containerised environment
This approach allows us to run Terraform locally and deploy our code into the local AWS environment just as we would in production. We can then perform smoke tests against a live system running locally in AWS. This dramatically shortens feedback loops and boosts deployment accuracy.
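Pointing Terraform at the local environment is a matter of provider configuration. The sketch below redirects a few services to LocalStack; the endpoint list would be trimmed or extended to match the services a project actually uses.

```hcl
# Illustrative provider override directing Terraform at LocalStack instead of
# real AWS. Only the services listed in the endpoints block are redirected.
provider "aws" {
  region                      = "us-east-1"
  access_key                  = "test"            # dummy credentials
  secret_key                  = "test"
  skip_credentials_validation = true
  skip_requesting_account_id  = true
  s3_use_path_style           = true

  endpoints {
    s3     = "http://localhost:4566"
    lambda = "http://localhost:4566"
    iam    = "http://localhost:4566"
  }
}
```

With this in place, the same `terraform plan` and `terraform apply` commands used against production run locally in seconds.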
These repeatable processes run in CI pipelines, ensuring deployment validation after each commit.
An added bonus of this approach is that when the AWS cloud environment lags behind the local environment, we can hand the local environment over to our UX developers, allowing them to continue development against real services deployed to a local AWS environment, unblocking them.
Case study: accelerating AWS adoption in a regulated environment
We worked with a company new to AWS, operating under strict regulatory constraints. Their AWS platform and governance model were still under development, yet we faced tight delivery deadlines.
Iterating quickly without waiting
While the AWS environment was being built, we couldn’t afford delays. We needed a way to:
- Rapidly prototype and validate AWS usage
- Provide fast feedback on Terraform modules under development
- Work independently of infrastructure readiness
Our containerised environment allowed us to continue working independently while validating the infrastructure team’s work.
Example: validating naming policies
The Cloud team provided a set of naming policies for AWS resources. As we would expect, these rules evolved as the platform matured.
Using Terraform locally, we could:
- Generate and validate Terraform plans against naming rules
- Quickly update our Terraform code to match the current policies
- Complete feedback cycles in ~30 seconds
- Run on every commit in our CI pipeline
Local iteration time: ~30 seconds. Estimated AWS iteration time: 1+ day.
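A check like this can be a few lines of code run over every plan. The naming pattern and resource names below are hypothetical; a real check would read resource names out of `terraform show -json` output rather than a hard-coded list.

```python
# Sketch of a fast, local naming-policy check. The policy pattern and the
# resource names are hypothetical examples, not the customer's actual rules.
import re

# Hypothetical rule: <org>-<env>-<descriptive-name>, lowercase, hyphen-separated
NAMING_RULE = re.compile(r"^acme-(dev|test|prod)-[a-z0-9]+(-[a-z0-9]+)*$")

def violations(resource_names):
    """Return the subset of names that break the naming policy."""
    return [name for name in resource_names if not NAMING_RULE.fullmatch(name)]

names = ["acme-dev-orders-queue", "ordersQueue", "acme-prod-audit-log"]
print(violations(names))  # only the non-conforming name is reported
```

When the Cloud team revises a policy, updating the pattern and re-running the check on every commit keeps the Terraform code continuously compliant.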
This is just one example of many where feedback times were drastically improved. Over one year of working this way, the accumulated productivity gains we achieved gave us a massive advantage and enabled us to deliver faster and more confidently, even when we did not have access to an AWS environment.
Using this approach, approximately six months of development was completed by our team before the AWS environment was available. When deploying into this environment, our Terraform and deployments ran successfully the first time because we had repeated this process hundreds, if not thousands, of times locally before going to a live environment.
The evolution of containerised workflows has been a gradual journey that has occurred over several years. We now take these workflows for granted. However, looking back, it’s hard to imagine how we would have been able to deliver value at the rate we do without these tools. Containerised local development environments have significantly transformed how we deliver software and given us a significant edge.
The following features of our local setup have streamlined and enhanced our development workflows:
- Cross-platform compatibility: Docker provides a consistent development environment, regardless of the developer’s operating system
- Repeatable setup: Environments can be launched in a standardised, repeatable manner, ensuring that any codebase can be started and tested in the same way
- Enhanced integration testing: We can now test against real infrastructure, improving test reliability and confidence
- Automated local cloud tests: Cloud-based tests can be run locally in a fully automated, isolated and repeatable way, delivering rapid and safe feedback
- Freedom to experiment: Developers can safely explore and experiment within the environment, helping them better understand the cloud ecosystem
- Test-driven approach: Experiments are captured as tests and run in the CI pipeline
These capabilities have proven invaluable, enabling us to deliver value accurately and efficiently, even in highly regulated and uncertain environments.
This tooling shifts our focus toward precision and continuous learning through fast feedback. Fast feedback is essential to working iteratively, supporting practices such as evolutionary architecture and allows us to rapidly adapt to change. Ultimately, this empowers us to navigate uncertainty and deliver outstanding outcomes for our customers.