
OUR THOUGHTS
Product ops for AI initiatives: moving beyond AI mandates
Posted by Gareth Evans . Jul 07.25
Organisations are increasingly incorporating AI into their products and services, driven by the desire to stay ahead of the technology curve and avoid falling behind their competitors.
Yet as Artificial Intelligence initiatives proliferate, a troubling pattern emerges: teams are being handed technology solutions without clear goals or strategies, leaving them to navigate complex challenges without the guidance required for success.
The current AI landscape: mandates without strategy
AI tools and capabilities are driving unprecedented technological advancement across organisations today. Teams increasingly receive mandates to leverage AI or build AI features, often without understanding the underlying rationale or what the technology can deliver. This phenomenon isn’t limited to the corporate world; government agencies are signing expensive contracts with major tech providers and trying to get returns on their investments by pushing teams to incorporate AI into their services.
The pressure comes from multiple directions. There’s the fear of being left behind if competitors move faster with strategic AI adoption and AI product development. There’s peer pressure from success stories shared at conferences and in industry publications. And there’s the allure of perceived productivity gains from using generative AI tools in product teams.
Over the last decade, many product engineering teams have learned that having a customer-centric product strategy and understanding which customer problems to solve are critical for business success. Continuous discovery, rapid experimentation to validate hypotheses and iterative solution testing with data-driven decision-making ensure this customer focus.
When organisations start with a predetermined solution – AI – rather than a well-understood problem, this carefully developed approach breaks down under the pressure to use AI without clear outcomes.
How teams are struggling with AI mandates
The challenges teams face when handed AI mandates are multifaceted. The most fundamental issue is a lack of clear goals aligned with strategy. When leaders declare that teams must use AI without explaining what they hope to achieve, teams cannot determine whether they’re being successful. This creates a cascade of problems that undermine effective AI product development.
Teams are trying to build a strategy around a technology push rather than customer needs and business outcomes. They struggle to set up strategic goals and progress-tracking mechanisms when the desired outcomes remain unclear. The frameworks that product teams use, whether OKRs or other goal-setting approaches, become difficult to apply when the starting point is a solution rather than a validated problem.
Perhaps most concerning is how AI mandates undermine team autonomy. Product teams that have grown accustomed to prioritising work based on customer problems and business value suddenly find themselves forced to prioritise mandated AI projects over work already identified as necessary. Organisations that shift back toward a ‘feature factory’ mentality remove the agency that attracts talented people to product development roles, where their inspiration and innovation are required for success.
The technical challenges with AI initiatives also need to be considered. Unlike traditional software systems with more deterministic properties, generative AI systems are inherently non-deterministic and promise to be ‘everything machines’ capable of handling any task. This makes traditional testing and quality assurance approaches inadequate. Teams must develop new frameworks for continuously validating systems not designed to produce predictable outputs.
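To make this concrete, here is a minimal sketch of what such validation might look like. It samples a placeholder generate() call repeatedly and applies rubric-style property checks with a pass-rate threshold, rather than a single exact-match assertion; the function names, rubric terms and thresholds are illustrative assumptions, not a prescribed framework.

# Illustrative sketch only: rubric-style checks over repeated samples,
# instead of exact-match assertions that assume deterministic output.

def generate(prompt: str) -> str:
    # Placeholder for whatever non-deterministic model call a team actually uses.
    raise NotImplementedError

def passes_rubric(response: str) -> bool:
    # Property checks that any acceptable answer should satisfy (assumed examples).
    must_contain = ["refund", "14 days"]              # required facts
    must_not_contain = ["guarantee", "legal advice"]  # disallowed claims
    text = response.lower()
    return all(term in text for term in must_contain) and not any(
        term in text for term in must_not_contain
    )

def evaluate(prompt: str, samples: int = 20, min_pass_rate: float = 0.95) -> bool:
    # Sample repeatedly and require a minimum pass rate, not a single passing run.
    passes = sum(passes_rubric(generate(prompt)) for _ in range(samples))
    return passes / samples >= min_pass_rate

Because outputs vary between runs, the useful question shifts from ‘does this test pass?’ to ‘what proportion of sampled outputs meet our acceptance criteria?’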
A better approach: applying product development practices to AI
The solution isn’t to abandon AI initiatives but to apply the same rigorous product management practices to AI integration that teams have refined over years of experience. This means starting with the fundamental question of ‘why?’: what specific goals are we trying to achieve and what customer problems are we solving?
When faced with an AI mandate, the most productive first step is to uncover the fundamental objectives behind the directive. The stated goal is often improved productivity or efficiency, but the underlying motivation may be reducing cost or staying competitive. Once teams understand the actual goals, they can frame strategic objectives around them and establish success metrics to track progress.
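As a purely illustrative sketch, the structure below shows one way a team might write down the inferred goal behind a mandate alongside measurable success metrics before committing to a solution; the scenario, baselines and targets are invented for demonstration.

# Hypothetical example: reframing an AI mandate as an outcome with measures.
objective = {
    "mandate": "Use AI in customer support",
    "inferred_goal": "Reduce cost per resolved ticket without hurting satisfaction",
    "success_metrics": [
        {"metric": "cost_per_resolved_ticket", "baseline": 12.40, "target": 9.90},
        {"metric": "csat_score", "baseline": 4.2, "target": 4.2},  # must not drop
        {"metric": "first_contact_resolution_rate", "baseline": 0.61, "target": 0.70},
    ],
    "candidate_solutions": [
        "LLM-assisted agent replies",
        "better self-service documentation",
        "workflow automation without AI",
    ],
}

Writing the goal down in this form makes it much easier to tell later whether the mandate delivered anything, and keeps non-AI options visibly on the table.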
This approach also opens up the solution space. Instead of being locked into using AI, teams can evaluate multiple approaches to achieving the desired outcomes. They can apply their usual assessment practices to determine which solution, whether AI-based or not, best serves both customer needs and business objectives.
AI initiatives make the traditional product development framework of assessing feasibility, viability and desirability even more critical. For feasibility, teams need to apply existing AI principles that many organisations have already developed, deal with non-deterministic outcomes within legal and compliance frameworks and understand that building custom AI systems requires significant time, resources and specialised capabilities.
Viability considerations include conducting proper due diligence on third-party AI providers, especially those that lack proven business models, increasing the risk to service continuity and cost management. Teams must also factor in the environmental costs of AI systems, which are increasingly significant and can impact an organisation’s sustainability goals.
From a desirability perspective, teams need to consider whether customers actually want AI-powered interactions in the context of the service. The assumption that customers prefer faster responses regardless of source may prove false, particularly in customer support scenarios where people prefer human interaction.
Addressing ethical considerations in AI projects
AI initiatives have ethical considerations that extend beyond traditional software development concerns. Environmental impact represents one of the most significant challenges, as training and operating large language models require enormous energy and water. Organisations can address this by measuring and accounting for these environmental costs in their risk assessments, treating them as both monetary expenses and factors in solution selection.
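A simple back-of-the-envelope model can make these costs visible alongside financial ones; every figure in the sketch below is a placeholder assumption that a team would replace with measured or vendor-reported data.

# All values are assumed placeholders for illustration only.
requests_per_month = 500_000
energy_per_request_kwh = 0.003      # assumed average energy per inference
water_per_kwh_litres = 1.8          # assumed data-centre water intensity
electricity_cost_per_kwh = 0.15     # assumed tariff

monthly_energy_kwh = requests_per_month * energy_per_request_kwh
monthly_water_litres = monthly_energy_kwh * water_per_kwh_litres
monthly_energy_cost = monthly_energy_kwh * electricity_cost_per_kwh

print(f"Energy: {monthly_energy_kwh:,.0f} kWh, "
      f"water: {monthly_water_litres:,.0f} L, "
      f"cost: ${monthly_energy_cost:,.2f} per month")

Even rough numbers like these give environmental impact a line item in the risk assessment rather than leaving it as an abstract concern.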
The risk of bias in AI-generated content requires teams to scrutinise outputs more carefully, which can actually increase awareness of bias issues and help uncover problems in training data. Organisations should consider open-source models over proprietary ones to reduce legal risks associated with data sovereignty and security. They should also evaluate providers based on their labour practices for data labelling work.
Geopolitical considerations also play a role, as organisations may want to avoid providers funded by investors from countries with concerning human rights records. These ethical factors shouldn’t be afterthoughts but integral parts of the planning and risk assessment process.
Transforming mandates into opportunities
Rather than viewing AI mandates as constraints, teams can reframe them as opportunities to uncover underlying pain points and explore innovative solutions. When someone requests automation for a tedious task, it might reveal a chance to make that work more valuable or eliminate it entirely. Requests to automate repetitive processes could lead to process optimisation, creating more time for value-generating activities.
If teams feel overwhelmed and want to offload work to AI systems, this might indicate a need to refocus on what truly matters and eliminate low-value activities. For example, automated transcription requests from deaf community members or non-native speakers could spark broader initiatives to make communication more inclusive through reliable translations and asynchronous engagement options.
AI mandates can also expand teams’ understanding of technical possibilities, leading to better exploration of solutions for customer problems. They provide opportunities to bring customer-centricity back to technology-driven conversations by asking what issues are being solved and what value is being created for customers and the business.
Importantly, these conversations reveal that existing data science and machine learning implementations might solve the same problems with significantly fewer ethical risks and lower resource requirements than large language models.
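For instance, a task such as routing support tickets to the right queue can often be handled by a conventional supervised classifier; the toy sketch below uses scikit-learn with made-up examples purely to show the shape of that lighter-weight alternative.

# Toy illustration: a conventional text classifier for ticket routing,
# as a cheaper, more predictable alternative to a large language model.
# The training examples are invented for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tickets = [
    "I was charged twice this month",
    "How do I reset my password?",
    "The invoice total looks wrong",
    "I can't log in to my account",
]
labels = ["billing", "access", "billing", "access"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tickets, labels)

print(model.predict(["My payment was duplicated"]))  # routes the ticket to a learned queue

A model like this is deterministic at inference time, cheap to run and straightforward to test, which sidesteps many of the validation and resourcing challenges discussed above.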
Building sustainable AI practices
The key to successful AI initiatives lies in treating them like any other product development challenge. This means spending time in the problem space before jumping to solutions, applying rigorous feasibility and viability assessments and maintaining focus on customer value and business outcomes.
Organisations that take this approach will find themselves better positioned to make informed decisions about when and how to use AI technologies. They’ll build more sustainable AI practices that align with their values and long-term business objectives, rather than chasing the latest technological trends without a clear purpose.
Those who succeed with generative AI in product teams will not be those who adopt it fastest or most broadly, but those who apply it most thoughtfully and usefully. By maintaining their commitment to good AI product development practices and customer-centric product strategy, they can navigate strategic AI adoption in ways that create genuine value rather than just checking boxes.
The future belongs not to organisations that mandate AI use, but to those that empower their teams to solve real problems with the best available tools – whether those happen to include AI or not.