OUR THOUGHTS

Questions leaders must ask during AI implementation

Posted by Martin Kearns, Daniel Walters, Gareth Evans . Jun 18.25

We are regularly reminded by technology experts that AI will transform business operations, reduce costs and create competitive advantages. We agree these opportunities exist, but this blog isn’t just another article promoting AI adoption.

We explore what doesn’t get the airtime it deserves – the critical questions leaders must ask before diving headfirst into AI initiatives and how to approach AI implementation in a mature way that delivers genuine business value.

The reality is that AI initiatives are fundamentally different from traditional software development approaches. You cannot simply apply existing project management methods and expect success. Understanding this difference and asking the right questions upfront will determine whether your AI initiatives become strategic assets or expensive lessons.

Strategic investment and business alignment

The most dangerous assumption leaders make is believing they can write a comprehensive business case for AI upfront. This approach is fundamentally flawed because you’re essentially claiming confidence that an algorithm will align perfectly with your specific situation before it has been tested. AI solutions require incremental funding rather than large upfront commitments, so there is room for experimentation and learning.

The nature of AI solutions also demands a different approach to maintenance and monitoring. Unlike traditional build-and-run scenarios, AI systems require ongoing attention because of phenomena such as model drift and the potential for hallucinations.
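The monitoring concern can be made concrete. As an illustrative sketch (not a method prescribed in this post), a simple drift check compares the distribution of a model’s production scores against its training-time baseline using the Population Stability Index; the function name, bin count and the 0.2 threshold here are common rules of thumb, not fixed standards:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions. A PSI above ~0.2 is a common
    rule-of-thumb signal of significant drift (illustrative only)."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def freqs(values):
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # Smooth zero counts so log() and division never blow up
        return [(c + 0.5) / (len(values) + 0.5 * bins) for c in counts]

    p, q = freqs(expected), freqs(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical data: training-time scores vs. this week's production scores
baseline = [0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65]
current  = [0.60, 0.65, 0.70, 0.72, 0.75, 0.80, 0.82, 0.85, 0.90, 0.95]
psi = population_stability_index(baseline, current)  # well above 0.2 here
```

A check like this is cheap to run on a schedule, and it is exactly the kind of ongoing attention that distinguishes AI systems from build-and-run software.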

If you’re outsourcing the monitoring, maintenance and ethical oversight of these systems, you’re taking significant risks if something goes wrong. The question leaders must ask is whether they have the internal capability to maintain responsibility for AI decisions, even when using third-party solutions.

Smart leaders approach AI investment through what can be called a ‘pay to learn’ approach, where initial investments represent no more than five percent of the total budget. This ensures that if the pilot fails, 95 percent of resources remain available for alternative approaches. Once you exceed 10 percent of budget on unproven AI concepts, stakeholder confidence typically erodes rapidly.
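The ‘pay to learn’ thresholds above are simple arithmetic, but making them explicit helps when gating pilot funding. A minimal sketch (the function name and the example budget figure are illustrative, not from the post):

```python
def pilot_budget_caps(total_budget):
    """'Pay to learn' gates: cap the first experiment at 5% of the total
    budget, and treat 10% cumulative spend on an unproven concept as the
    point where stakeholder confidence typically starts to erode."""
    return {
        "initial_pilot_cap": total_budget * 0.05,      # first experiment
        "confidence_threshold": total_budget * 0.10,   # warning line
        "remaining_if_pilot_fails": total_budget * 0.95,
    }

# Hypothetical annual AI budget of $1,000,000
caps = pilot_budget_caps(1_000_000)
```

With a $1,000,000 budget, the pilot is capped at $50,000, leaving $950,000 available for alternative approaches if it fails.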

Why organisational resilience and adaptability are important

AI implementation requires organisations to embrace a level of empirical learning that many find uncomfortable. Traditional business cultures celebrate green project status reports and successful delivery against predetermined specifications. However, AI projects often require leaders to acknowledge and even celebrate failures as valuable learning experiences. This cultural shift represents perhaps the biggest challenge in AI adoption.

The most successful AI implementations occur within what can be described as accelerated innovation containers, where the right people have dedicated time and immediate access to decision-makers. If your AI initiative is competing for calendar time with regular business operations, you’re undermining its potential from the start. The accelerated learning loops required for AI success demand instant feedback and rapid iteration, which cannot happen when key stakeholders are unavailable for weeks at a time.

Organisations must also develop comfort with hypothesis-driven approaches rather than requirements-driven projects. This means learning to set meaningful hypotheses and being willing to call experiments red when they’re not working, rather than maintaining artificial green status reports. The ability to pivot quickly based on empirical evidence becomes a core organisational capability, not just a reporting exercise.

The lifecycle of AI solutions moves through stages much faster than traditional software, often passing through multiple evolution cycles in a short period. This compression of the product lifecycle means organisations need new capabilities around rapid prototyping, continuous testing and frequent decommissioning of solutions that no longer provide value.

AI governance

Effective AI governance starts with the recognition that there isn’t an endless supply of money for AI initiatives despite the current enthusiasm surrounding the technology. At some point, leaders must compare AI investments against one another and make portfolio-style decisions about where to allocate resources for the best probability of return.

This portfolio approach requires new frameworks for assessing when to allow an AI solution to be deprecated or stopped entirely because it’s not delivering promised returns. Traditional software might continue providing value for years, whereas an AI solution may need replacement within months as better alternatives emerge or as business requirements evolve beyond the solution’s capabilities. This reality demands governance frameworks that can handle faster asset depreciation and more frequent decisions about continuing or discontinuing AI initiatives.

The governance challenge is complicated by the fact that AI spending doesn’t necessarily flow through traditional IT budgets. Business units can invest in AI-embedded products and services outside of central technology oversight, creating potential for significant uncontrolled spending across the organisation. Leaders need visibility into all AI-related investments, regardless of which budget they originate from.

The governance model must also account for the reality that every product and service is trying to incorporate AI features. This trend raises questions about when products lose their original value proposition because they’ve been modified with so many AI capabilities that their core identity becomes unclear.

Ethical considerations

The fear-mongering around AI replacing human workers has reached problematic levels, particularly on social media platforms where dramatic predictions about workforce displacement generate engagement but ignore economic realities. The logic that half the workforce could become redundant fails to account for the simple fact that unemployed people cannot purchase goods and services, which would ultimately make much of the remaining workforce redundant as well.

A more constructive approach focuses on how AI can eliminate tedious work and create space for human creativity and innovation. Imagine AI making code refactoring as automated as code compilation, allowing developers to focus on solving complex problems rather than managing technical debt. This vision of AI as a productivity multiplier rather than a replacement represents a more realistic and beneficial outcome for society.

However, the ethical concerns around AI are real and significant. Generative AI systems lack consciousness, yet they make decisions based on pattern recognition that could lead organisations into serious trouble. Leaders must ask themselves whether they’re prepared for scenarios where AI recommendations drive customer behaviour in potentially harmful ways, such as encouraging addictive purchasing patterns or accidentally targeting vulnerable customers.

The gambling industry provides a cautionary example of how algorithmic optimisation can create harmful outcomes that eventually require extensive regulatory intervention. Advertisement restrictions, affordability-based spending limits and other regulatory requirements now constrain gambling operators because the industry initially prioritised revenue growth over ethical considerations.

Leaders implementing AI must consider whether they’re creating similar risks in their own domains. A retailer using AI to increase shopping basket sizes must ask whether those recommendations might contribute to unhealthy spending behaviours, particularly during late-night hours when impulse control is typically lower.

The solution isn’t to avoid AI entirely but to implement it thoughtfully with consideration for long-term consequences. This means establishing ethical guidelines before problems occur, rather than reacting to regulatory requirements after public incidents force government intervention. The organisations that proactively address ethical considerations will likely avoid regulatory problems when others fail to self-regulate effectively.

Smart leaders approach AI as a tool that should make them better versions of themselves rather than as a replacement for human judgment and creativity. When implemented with proper governance, ethical considerations and realistic expectations about both capabilities and limitations, AI can indeed transform business operations while preserving the human elements that drive innovation and create genuine value for customers.

Martin Kearns

An Associate Partner of HYPR, Martin thrives on challenging traditional working norms to align organisations with the ever-evolving demands of today's and future economies.
