Improving security with flow engineering
Posted by Gareth Evans . Nov 25.24
How do you improve security while delivering valuable software at speed? Traditionally, security and speed have been viewed as opposing forces, creating a false dichotomy that has hindered both objectives. Yet security improves when defence is balanced with rapid response. With the right engineering practices in place, accelerating the flow of value improves security rather than compromising it.
The flow advantage in security
The traditional approach to security emphasised caution and careful processes that favoured gates, controls and inspecting security in, often late in the delivery process and across large batches of work. This approach has become increasingly ineffective in today’s threat landscape. The sophistication of attacks has evolved dramatically, with specialist hacker teams finding vulnerabilities and writing complex exploits. To stay safe, companies must deliver faster to outpace the development of attacks. Engineering practices also need to evolve, with engineers taking more responsibility for security by automating security checks and adopting architectural patterns that reduce the impact of exploits.
Flow itself becomes a form of defence. More frequent releases create a fast-moving target, which is harder to hit. Systems that can rapidly detect, respond and adapt to threats are inherently more secure. This requires organisations to think differently about their security approach, moving from slow, cautious releases to frequent, safe, highly-automated releases that support fast response and adaptation to threats.
Understanding your organisation’s flow distribution helps you understand the balance of different types of work, including risk and debt work, which can both directly improve security. Categorising work into four types – features, defects, risks and debt – allows organisations to understand their current investment balance. An appropriate balance ensures enough capacity is allocated to debt work, which leads to continuous modernisation of architecture, including security patterns. Risk flow items don’t produce direct product value, but neglecting them could lead to lawsuits, reduced customer trust or higher business risk. Risk items might include MFA enforcement for cloud infrastructure or improving software supply chain management.
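As a minimal sketch of what this can look like in practice (the item names and tags below are purely illustrative), the flow distribution can be calculated from however work items are already categorised:

```python
from collections import Counter

# Hypothetical work items, each tagged with one of the four flow item types.
work_items = [
    {"id": "PAY-101", "type": "feature"},
    {"id": "PAY-102", "type": "defect"},
    {"id": "PAY-103", "type": "risk"},     # e.g. enforce MFA for cloud accounts
    {"id": "PAY-104", "type": "debt"},     # e.g. replace a vulnerable library
    {"id": "PAY-105", "type": "feature"},
]

counts = Counter(item["type"] for item in work_items)
total = sum(counts.values())

# Flow distribution: the share of completed work in each category.
for flow_type in ("feature", "defect", "risk", "debt"):
    share = 100 * counts.get(flow_type, 0) / total
    print(f"{flow_type:<8}{share:5.1f}%")
```

Tracking this distribution over time makes it visible when risk and debt work is being squeezed out by feature delivery.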
Developer experience
In most organisations, there are many more engineers than security team members. To be successful, security teams must therefore encourage engineers to adopt good security practices as code is written. Shifting security left – when done well – integrates security practices earlier in the development lifecycle, leveraging automation wherever possible.
For example, Docker containers can fundamentally improve developer experience, improving both flow and security. The developer experience benefit is not about the runtime security features of Docker, such as process isolation; it is about how the delivery ecosystem enables a team to experiment, learn and innovate faster. A team working on a software system can compose all the infrastructure they need to get fast feedback as they work. Mocked or containerised Identity and Access Management (IAM) helps engineers safely experiment with and learn about OAuth flows. A containerised workflow composed of a number of emulated supporting services makes it easier to write automated security tests that execute locally before running in CI pipelines.
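As a rough sketch of this kind of local feedback loop, a test can exercise an OAuth client-credentials flow against an IAM container running on the developer's machine. The IAM product, realm, client id and secret below are assumptions for illustration, not something prescribed here:

```python
import requests

# Assumes an IAM container (for example Keycloak) is running locally on port 8080,
# with a "demo" realm and a confidential client configured for local development.
TOKEN_URL = "http://localhost:8080/realms/demo/protocol/openid-connect/token"


def test_client_credentials_flow_issues_token():
    """A token can be obtained locally before any cloud environment is touched."""
    response = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": "payments-service",
            "client_secret": "local-dev-secret",
        },
        timeout=5,
    )
    assert response.status_code == 200
    assert "access_token" in response.json()


def test_invalid_client_is_rejected():
    """Negative path: wrong credentials must not yield a token."""
    response = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": "payments-service",
            "client_secret": "wrong-secret",
        },
        timeout=5,
    )
    assert response.status_code in (400, 401)
```

Because everything runs locally, the negative paths (wrong secrets, expired tokens, missing scopes) can be explored freely without touching shared environments.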
A great developer experience reduces the blast radius for security experiments, starting with a ‘local’ machine before safely extending to cloud environments where other team members, the broader organisation or even your customers may be affected.
Security and CI
Static Application Security Testing (SAST) is an important mechanism that can be incorporated into build pipelines. Modern SAST tools can analyse source code to identify security vulnerabilities, bugs and code smells early in the development process. A word of caution here though – all source code analysis should be configured to catch genuinely useful issues, not to create noise that teams spend time addressing without adding value.
Effective SAST implementation includes integration into developer IDEs for immediate feedback and automated scanning in CI/CD pipelines for high-risk security issues such as open-source library vulnerabilities or breaches of security standards. Quality gates then prevent insecure code from reaching production.
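A quality gate can be as simple as a pipeline step that fails the build when the scan reports serious findings. Here is a minimal sketch that assumes the SAST tool writes a standard SARIF report; the file name and the severity threshold are illustrative choices, not fixed conventions:

```python
import json
import sys

# Fail the build if the SAST report contains any "error"-level findings.
# SARIF is a standard output format supported by many scanners; the file name
# below is just an example of what a pipeline step might produce.
REPORT_PATH = "sast-report.sarif"


def blocking_findings(report_path: str) -> list[str]:
    with open(report_path) as f:
        report = json.load(f)
    findings = []
    for run in report.get("runs", []):
        for result in run.get("results", []):
            if result.get("level") == "error":
                rule = result.get("ruleId", "unknown-rule")
                message = result.get("message", {}).get("text", "")
                findings.append(f"{rule}: {message}")
    return findings


if __name__ == "__main__":
    findings = blocking_findings(REPORT_PATH)
    for finding in findings:
        print(f"BLOCKING: {finding}", file=sys.stderr)
    # A non-zero exit code stops the pipeline, acting as the quality gate.
    sys.exit(1 if findings else 0)
```

Most CI systems treat a non-zero exit code as a failed stage, so the same gate works unchanged across pipelines.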
Infrastructure as Code (IaC) is essential for maintaining security at scale. Tools like Terraform enable organisations to codify security controls and ensure consistent implementation across their infrastructure from source code repository creation to infrastructure deployment and observability.
Key security practices for IaC may include:
- creating reusable modules with embedded security controls
- implementing policy as code to enforce security guardrails
- automating security validation during provisioning
- continuously monitoring for configuration drift
- maintaining audit trails of infrastructure changes
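Dedicated policy-as-code tools exist for this, but the idea can be illustrated with a short script that inspects a Terraform plan before anything is provisioned. The sketch assumes the plan has been exported with terraform show -json, and the rule it enforces (no security groups open to the world on SSH) is just an example:

```python
import json
import sys

# Assumes the plan was exported first, for example:
#   terraform plan -out=tfplan && terraform show -json tfplan > plan.json
PLAN_PATH = "plan.json"


def open_ssh_violations(plan_path: str) -> list[str]:
    """Return addresses of AWS security groups that allow SSH from anywhere."""
    with open(plan_path) as f:
        plan = json.load(f)
    violations = []
    for change in plan.get("resource_changes", []):
        if change.get("type") != "aws_security_group":
            continue
        after = (change.get("change") or {}).get("after") or {}
        for rule in after.get("ingress") or []:
            if rule.get("from_port") == 22 and "0.0.0.0/0" in (rule.get("cidr_blocks") or []):
                violations.append(change.get("address", "unknown resource"))
    return violations


if __name__ == "__main__":
    violations = open_ssh_violations(PLAN_PATH)
    for address in violations:
        print(f"POLICY VIOLATION: {address} allows SSH from 0.0.0.0/0", file=sys.stderr)
    sys.exit(1 if violations else 0)
```

Run between terraform plan and terraform apply, a non-zero exit blocks the change before it reaches any environment.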
API security risks and mitigation
The OWASP Top 10 API Security Risks highlight critical areas where practices that accelerate flow can help improve security too. For example, the top three OWASP API risks focus on API authorisation on each request, preventing attackers from accessing data or performing actions on behalf of another user. Eliminating OWASP API risks helps reduce data breaches and identity theft.
Good architectural patterns need to be in place, such as Attribute Based Access Control (ABAC) for each API resource and de-referencing of security tokens (such as JWTs) on each request rather than trusting the claims inside them.
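A rough sketch of what this can look like inside an API service, where every request de-references the bearer token with the IAM before an attribute-based check is applied to the resource. The web framework, the introspection endpoint and the attribute names are all assumptions for illustration:

```python
import requests
from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical IAM token introspection endpoint (RFC 7662 style) and client credentials.
INTROSPECTION_URL = "http://localhost:8080/realms/demo/protocol/openid-connect/token/introspect"
CLIENT_ID = "payments-service"
CLIENT_SECRET = "local-dev-secret"


def introspect(token: str) -> dict:
    """De-reference the token with the IAM on every request rather than trusting its claims."""
    response = requests.post(
        INTROSPECTION_URL,
        data={"token": token, "client_id": CLIENT_ID, "client_secret": CLIENT_SECRET},
        timeout=2,
    )
    response.raise_for_status()
    return response.json()


def require_attributes(subject: dict, resource_owner: str) -> None:
    """ABAC: allow the call only if the caller's attributes match the resource."""
    if not subject.get("active"):
        abort(401)
    # Example attribute rule: only the account owner or an auditor may read an account.
    if subject.get("sub") != resource_owner and "auditor" not in subject.get("roles", []):
        abort(403)


@app.get("/accounts/<owner_id>")
def get_account(owner_id: str):
    auth_header = request.headers.get("Authorization", "")
    if not auth_header.startswith("Bearer "):
        abort(401)
    subject = introspect(auth_header.removeprefix("Bearer "))
    require_attributes(subject, owner_id)
    return {"owner": owner_id, "balance": 0}
```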
Practices that accelerate flow, such as layered testing, can be applied as APIs are developed to ensure security does not regress. The auth for each new API resource is tested on each commit to source control. Good architectural patterns and engineering practices go hand-in-hand with improving security and flow. So why not embrace the benefits of both outcomes?
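As a sketch of that kind of per-commit check, here are a couple of tests that pin the authorisation behaviour of a new resource. The URLs, users and credentials are placeholders for whatever the locally composed environment provides:

```python
import pytest
import requests

BASE_URL = "http://localhost:5000"   # the locally composed API under test
TOKEN_URL = "http://localhost:8080/realms/demo/protocol/openid-connect/token"


@pytest.fixture
def bob_token() -> str:
    """Obtain a token for a user who does not own the resource under test."""
    response = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "password",
            "client_id": "payments-service",
            "client_secret": "local-dev-secret",
            "username": "bob",
            "password": "local-dev-password",
        },
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["access_token"]


def test_reading_an_account_requires_a_token():
    response = requests.get(f"{BASE_URL}/accounts/alice", timeout=5)
    assert response.status_code == 401


def test_another_users_token_cannot_read_the_account(bob_token):
    response = requests.get(
        f"{BASE_URL}/accounts/alice",
        headers={"Authorization": f"Bearer {bob_token}"},
        timeout=5,
    )
    assert response.status_code == 403
```

Because these tests run against the same containerised composition used for local development, they execute on every commit without any shared infrastructure.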
Organisations need to maintain a healthy investment in debt and risk work to continuously improve their security posture.
Zero Trust architecture principles
Zero Trust architecture principles align well with the goals of accelerated flow and enhanced security. Zero Trust principles, including continuous verification, support frequent deployment while maintaining security through strong identity controls. This approach moves security from a gate-keeping model to a continuous verification process that enables, rather than inhibits, flow.
The principle of least privilege access helps create secure boundaries while enabling frequent deployment and testing. Automated access controls don’t require manual intervention for each deployment or change. Each change executes layered tests to ensure security rules have not regressed.
Continuous monitoring and analytics – another key Zero Trust principle – enable organisations to maintain security and flow. With good observability tools, organisations can detect security issues in real time, reducing the time to recover from potential threats while maintaining high deployment velocity.
Automation and orchestration principles in Zero Trust architectures directly support improved flow by streamlining security processes and reducing manual intervention. Automation of infrastructure, deployment, testing and release ensures consistent security policy enforcement throughout deployment pipelines and is especially important when systems become more distributed with more moving parts.
Enabling security knowledge
Security teams often face a common challenge – effectively spreading security knowledge and best practices across an organisation without becoming a bottleneck. Team Topologies offers a useful framework for addressing this challenge through the use of enabling teams and platform teams.
- Enabling teams are temporary teams that teach and upskill other teams until they can work independently. They don’t build features but instead focus on sharing knowledge and helping teams overcome challenges.
- Platform teams, on the other hand, are permanent teams that build and maintain internal tools and services used by other teams. They treat their platform as a product, focusing on making it easy for other teams to build and deploy software efficiently.
The key distinction is that enabling teams teach others how to solve problems, while platform teams provide ready-to-use solutions that reduce the cognitive load on other teams.
A Team Topologies approach to spreading security knowledge may start with security specialists acting as an enabling team, focusing on upskilling other teams around a particular problem, such as OWASP API security patterns. In this role, they work closely with stream-aligned teams through facilitation and hands-on guidance. Security specialists might pair with developers on security-critical features, offering immediate feedback and knowledge transfer in the context of the work. They might guide teams through threat modelling sessions, teaching them to think critically about potential vulnerabilities as they deliver. Through this facilitation of knowledge transfer, they introduce secure coding patterns that teams can apply to their specific domains. Teams learn security principles gradually, in the context of their actual work, making the knowledge more likely to stick and be applied effectively.
As security patterns mature and prove their value across multiple teams, some may naturally evolve into platform capabilities. Authentication and authorisation serve as a prime example of this evolution. What often starts as guidance on proper auth implementation can develop into a set of secure common libraries, making it easier for stream-aligned teams to integrate with an IAM service using a standardised SDK. Similarly, initial manual processes for handling secrets can transform into automated secure vault services with clear APIs.
The journey from manual security code reviews to automated security scanning illustrates another path. What begins as periodic reviews by security specialists can evolve into automated scanning integrated directly into the deployment pipelines.
As security capabilities move into the platform layer, the interaction model shifts from collaboration to X-as-a-Service (XaaS). This evolution reduces cognitive load, ensures consistent security practices across teams over time and fundamentally changes how teams interact with security requirements. Complex security implementations become packaged into consumable APIs, significantly reducing the cognitive load on stream-aligned teams. Teams can adopt security best practices through self-service mechanisms, while standardising security implementations across teams becomes natural rather than forced.
This shift frees up security specialists to focus on new areas of improvement, creating capacity for ongoing security innovation. The platform team can focus on maintaining and improving security services while the enabling team continues to facilitate new security patterns and practices into teams.
Transitioning from enabling team interactions to platform XaaS features can be problematic if undertaken too soon or without treating stream-aligned teams as customers of the platform. Consider XaaS security capabilities once a pattern has become well understood and stable through multiple successful team adoptions, the cognitive load of manual implementation is high enough to justify automation, and teams are regularly requesting similar guidance or implementation support.
Even with security-as-a-service patterns in place, the enabling team role remains important to organisational security. Security specialists can continue to identify new patterns and facilitate their introduction into teams. Engaging directly with stream-aligned teams helps understand their challenges and needs and may help identify new areas of improvement.
Embracing flow as a security enabler
The false dichotomy between security and speed in software delivery is giving way to a more nuanced understanding – one where flow engineering and security can reinforce each other. By adopting modern engineering practices, automated security controls and evolving team structures, organisations can achieve both objectives more effectively than ever before.
Flow becomes a security advantage through frequent, automated releases, while improved developer experience enables safer experimentation and innovation. The implementation of Zero Trust principles supports both security and deployment velocity, creating a foundation for frequent, secure delivery.
Team Topologies patterns provide a framework for effectively spreading security knowledge across product engineering teams, ensuring that security becomes everyone’s responsibility, supported by highly-skilled specialist security teams.
As security threats continue to evolve, organisations that can deliver secure software quickly will have a significant advantage. This requires rethinking the relationship between security and delivery speed, treating them not as competing forces but as complementary capabilities that strengthen each other. Through the application of modern engineering practices, team structures and platform capabilities, organisations can create systems where improved flow enhances security and better security enables faster flow.
The future of software delivery lies not in a choice between security and speed, but in recognising that you cannot have one without the other. Organisations that embrace this reality and invest in the patterns, practices, tools and culture to support both will be better positioned to deliver value to their customers while maintaining security.