
Evaluating strategic AI bets
Posted by Gareth Evans · Jun 12, 2025
Organisations are increasingly making strategic product bets on Artificial Intelligence, representing both tremendous opportunity and significant risk.
Many AI investments suffer from a disconnect between strategic intent and execution reality. Pressure from investors and analysts asking about AI initiatives can lead to rushed implementations without proper discovery.
How do you ensure you’re making the right strategic bets with AI? The answer lies in robust discovery to ensure you’re providing real value to customers, not just chasing technology trends.
The challenge of AI investments
AI bets amplify the promise and peril of technology investments. The potential upside can be transformative, but the downside risks – wasted investment, damaged reputation, ethical concerns – can be substantial. Robust discovery is particularly important for AI initiatives for several key reasons.
An AI bet might involve:
- Novel capabilities that haven’t been fully tested at scale
- Technical approaches with unclear integration pathways
- User experiences that customers haven’t encountered before
- Value propositions that haven’t been validated in the market
- Ethical and organisational implications that aren’t fully understood
This level of uncertainty demands a risk-mitigating approach to strategic planning and product development.
Why discovery is critical for AI bets
Effective discovery de-risks strategic bets by exploring, validating and refining ideas before committing significant resources to building them. This is particularly important with AI bets because:
- AI innovation presents a higher level of uncertainty than well-understood business use cases, which means a higher level of risk. Discovery provides structured methods to uncover risks and blind spots before they become costly missteps
- AI solutions often push the boundaries of what’s technically possible. Discovery activities like technical spikes and experiments help determine if your vision is technically feasible by evaluating data quality and availability, system performance and scalability, and your ability to integrate and extract the data you need
- Even the most advanced AI capabilities are worthless if users don’t understand, trust or find value in them. Discovery helps validate that AI capabilities solve real problems in ways people find intuitive and non-threatening to use
- AI initiatives often require substantial investment before delivering returns. Experiments help validate the business case incrementally, reducing the financial risk of big-bang commitments that haven’t yet been proven successful
Discovery tips for AI initiatives
So, what are the most important areas to focus on during the discovery phase of AI initiatives?
Define the outcome, not the solution
Start by clearly articulating the problem you’re trying to solve or the opportunity you’re pursuing, rather than specifying the AI/technology solution. For example, instead of saying ‘We need to implement a recommendation engine using collaborative filtering’, try ‘We need to increase customer engagement by making it quicker and easier for customers to discover products that match their interests’. This outcome-focused approach keeps you open to different solutions as you learn.
Assess data drag
Technical feasibility around data readiness is a crucial part of discovery for AI initiatives. Many organisations don’t realise how poor their data quality actually is, which can pose a significant technical challenge when implementing AI initiatives and needs to be explored before committing to an expensive build.
Data drag, as outlined by Dr Brian Lambert, represents a critical barrier that can slow or completely halt AI initiatives. This phenomenon encompasses the inefficiencies and obstacles in how data is collected, managed and utilised across an organisation. Data drag manifests through several interconnected challenges that create friction when implementing AI initiatives.
Key manifestations of data drag include:
- Data silos and fragmented systems: When departments operate in isolation, information cannot flow seamlessly across the organisation. This creates data islands that prevent AI initiatives from accessing and joining the datasets they require
- Compromised data quality: Inconsistent, incomplete or unreliable data undermines the foundation upon which AI initiatives are built. Poor data quality directly impacts the accuracy and trustworthiness of AI-driven insights and decision-making
- Legacy technology: Outdated systems struggle to handle the volume, velocity and variety of data that modern AI initiatives demand. These technological limitations create significant barriers to data accessibility and processing
- Inadequate data governance: Weak or unclear data policies lead to compliance risks, security vulnerabilities and untrustworthy insights. Without proper governance, organisations cannot find the data they need and cannot ensure data integrity
- Digital readiness gaps: Some organisations still operate with partially digitised workflows, making it impossible to apply AI to processes in non-digital formats
Many organisations have operated with sub-optimal data practices for years without immediate consequences. AI initiatives immediately expose these underlying weaknesses and directly impact the feasibility and viability of the investment. What once seemed like manageable inefficiencies become critical roadblocks that threaten organisational competitiveness and the ability to innovate.
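Assessing data drag during discovery can start very simply: before committing to model training, probe a sample of the data for the completeness and duplication problems described above. The sketch below is illustrative only; the record shape, field names and thresholds are assumptions, not a prescribed tool.

```python
# A minimal data-readiness probe, run during discovery rather than after
# committing to a build. Field names and thresholds are illustrative.

def profile_records(records, required_fields):
    """Summarise completeness and duplication for a list of dict records."""
    total = len(records)
    missing = {f: sum(1 for r in records if not r.get(f)) for f in required_fields}
    seen, duplicates = set(), 0
    for r in records:
        key = tuple(sorted(r.items()))  # whole-row key; assumes hashable values
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {
        "total": total,
        "duplicate_rows": duplicates,
        "missing_rate": {f: missing[f] / total for f in required_fields},
    }

def readiness_flags(profile, max_missing=0.05, max_duplicates=0):
    """Flag fields whose quality would undermine model training."""
    flags = [f for f, rate in profile["missing_rate"].items() if rate > max_missing]
    if profile["duplicate_rows"] > max_duplicates:
        flags.append("duplicates")
    return flags
```

Even a throwaway script like this surfaces silos and quality gaps in hours rather than discovering them months into an expensive build.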
Explore change barriers
Beyond technical challenges, AI investments face substantial human resistance. There’s often deep fear about what AI means for individual roles and organisational structure. When people have built entire careers around specific tasks and processes that AI can now at least partly automate, the natural reaction is anxiety and resistance. When creating AI products and services, consider the likely rate of adoption, especially where the change may provoke anxiety or fear in customers, internal staff or stakeholders unless education and support are available.
Similarly, employees might experiment with consumer AI tools but struggle to translate those experiences into enterprise-ready solutions.
There’s a substantial gap between the consumer experience of AI tools and the reality of enterprise implementation. The ease with which individuals can use generative AI for personal tasks often creates unrealistic expectations about enterprise adoption.
This disconnect typically manifests as unrealistic expectations about implementation timelines, resources required and outcomes expected. Organisations frequently underestimate the complexity of integrating AI into existing systems and workflows, particularly when those workflows aren’t fully digitised or optimised yet.
Map assumptions and risks
Systematically identify the key assumptions and risks in your AI bet. These typically fall into four categories:
- Desirability concerns whether users will want this AI capability and if they’ll trust it
- Viability questions whether this AI solution will deliver business value that justifies the investment
- Feasibility examines whether we can build this AI solution with the technology, data and talent available
- Usability considers if users can effectively interact with and benefit from this AI capability
Prioritise assumptions based on their uncertainty and importance.
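One lightweight way to apply this prioritisation is to score each assumption on importance (how damaging it is if wrong) and uncertainty (how little evidence you have), then test the riskiest first. The 1–5 scales and example statements below are illustrative assumptions, not a fixed methodology.

```python
# Rank assumptions by risk = importance x uncertainty; test the top ones first.
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    category: str      # desirability | viability | feasibility | usability
    importance: int    # 1 (low impact if wrong) .. 5 (critical)
    uncertainty: int   # 1 (well evidenced) .. 5 (pure guess)

    @property
    def risk(self) -> int:
        return self.importance * self.uncertainty

def prioritise(assumptions):
    """Return assumptions ordered by risk, highest first."""
    return sorted(assumptions, key=lambda a: a.risk, reverse=True)
```

The scoring is deliberately crude; its value is forcing the team to make relative risk explicit before choosing which experiments to run.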
Explore market and business fit
Does the AI solution align with customer needs and the broader business strategy? What is the expected ROI and how will the solution scale over time? Are there competitive alternatives and how does the proposed solution differentiate itself?
Design experiments
For each high-priority assumption, design experiments to validate or invalidate it with the minimum necessary investment. These can include:
- Prototype testing: create lightweight simulations of the AI experience
- Wizard of Oz testing: manually simulate AI capabilities to test user responses
- Data validation: analyse existing data to validate key assumptions
- Technical spikes: build minimal implementations to prove technical feasibility
Learn and adapt
As experimental results come in, be prepared to double down on promising directions, pivot when assumptions are invalidated and abandon paths that don’t show sufficient promise. For many organisations, this represents a cultural change in leadership and ways of working. Leaders must demonstrate these behaviours to support teams.
Discovery for an AI customer service assistant
Let’s examine how discovery might help a company considering a significant investment in an AI customer service assistant.
The bet is to implement an AI assistant to handle 70% of customer service inquiries, reducing costs while maintaining satisfaction. Initial assumptions include:
- Customers will accept and trust AI for service inquiries
- Existing data is sufficient to train an effective model
- The AI can accurately identify when to escalate to humans
- The solution will integrate with existing systems
- The cost savings will justify the investment
Instead of immediately building the full solution, the team might engage in several discovery activities. They could create a simple prototype of the conversation flow and test with customers to gauge acceptance and identify pain points. They might analyse existing customer service logs to determine if they contain the information needed to train effective models.
The team could have human agents simulate AI responses following guidelines to test the proposed solution. They might build a minimal version of the AI-based product to test accuracy on a sample of common inquiries and evaluate integration points with existing systems to identify potential challenges.
These discovery activities might reveal that customers accept AI for simple inquiries but want human options for complex issues. They might find that training data is missing context for certain inquiry types, the system struggles to identify when to escalate complex issues and integration with the legacy CRM will require substantial custom development.
Based on these learnings, the team might redefine the scope to target 40% automation initially, focusing on well-defined inquiry types. They could initiate a data enhancement project for specific inquiry categories, develop clearer escalation criteria, ensure human backup is seamless and build custom integration adapters for the legacy CRM.
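The clearer escalation criteria the team develops might start as something as simple as a routing rule: escalate when the model is unsure, or when the inquiry type is one discovery showed customers want handled by a person. The intents and threshold below are hypothetical placeholders for whatever the team's own experiments produce.

```python
# A sketch of explicit escalation criteria for the AI assistant example.
# Intent names and the confidence threshold are illustrative assumptions.

COMPLEX_INTENTS = {"complaint", "billing_dispute", "account_closure"}

def route(intent: str, confidence: float, threshold: float = 0.8) -> str:
    """Decide whether the AI assistant answers or escalates to a human."""
    if intent in COMPLEX_INTENTS:
        return "human"   # discovery showed customers want a person for complex issues
    if confidence < threshold:
        return "human"   # model is unsure, so hand off rather than guess
    return "ai"
```

Encoding the criteria this explicitly also makes them testable, so the "when to escalate" assumption can be validated on logged inquiries before going live.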
By investing in discovery before and during implementation, you can dramatically increase the chances of success while reducing wasted effort and resources.
Common pitfalls to avoid
When implementing continuous discovery for AI initiatives, there are several common pitfalls to avoid:
- Don’t mistake demos for validation. For example, vendor demos often show best-case scenarios, but your discovery should test real-world conditions
- Don’t overinvest in prototypes. Keep experiments focused on learning, not building perfect solutions. Time-boxing helps balance time with the value of information discovered
- Don’t ignore qualitative feedback. Numbers alone won’t tell you everything about how users perceive AI experiences. Detecting the signal in the noise takes a combination of data, experience and intuition
- Don’t skip technical feasibility. The most exciting AI capabilities are meaningless if they can’t be implemented reliably
- Don’t try to validate everything at once. Prioritise assumptions based on risk and uncertainty
- Don’t spend too long in discovery mode. Be pragmatic and timebox discovery based on the risk and value of information
Discovery in your organisation
To make discovery effective for your AI bets, there are several key considerations:
- Create dedicated discovery capacity rather than trying to squeeze discovery activities in alongside delivery
- Develop a culture of experimentation that rewards learning, not just delivery, and makes it safe to invalidate assumptions
- Build discovery skills by training your team in customer research, rapid prototyping and experiment design
- Establish clear decision criteria by defining upfront what evidence will guide your decisions to continue, pivot or stop
- Share learnings broadly to ensure insights from discovery activities are accessible across teams to prevent repeated mistakes
From uncertainty to confidence
AI bets involve far more uncertainty than traditional technology investments. Discovery provides a structured approach to navigating this uncertainty, helping you build confidence in your direction through evidence rather than assumptions.
By making time for robust discovery, you can validate the most critical aspects of your AI bet before making a significant investment. You’ll identify and address potential problems early, when changes are less costly. This approach helps you build solutions that actually deliver value to users and the business and adapt your approach as you learn, rather than sticking rigidly to initial plans.
In the complex domain of AI, the organisations that thrive will be those that can systematically explore possibilities, learn rapidly and adapt their approach based on evidence. Continuous discovery provides the framework to do exactly that.
Remember, the goal isn’t to eliminate all uncertainty. That’s impossible. Instead, the goal is to reduce uncertainty enough to make more confident bets while maintaining the agility to adapt as you learn more through implementation and customer feedback.