
Don’t outsource your critical thinking: human-centred design in the age of AI
Posted by The HYPR Team, Dr Dani Chesson . Sep 01.25
The relationship between human intelligence and Artificial Intelligence represents one of the most critical challenges facing organisations today. As AI systems become increasingly sophisticated and ubiquitous, the temptation to outsource critical and creative thinking grows stronger.
However, this approach fundamentally misunderstands the nature of AI and the irreplaceable value of human cognition in design and decision-making processes.
The intersection of Design Thinking and behavioural science
Design Thinking has long focused on understanding human needs, wants and pain points as the foundation for problem-solving. Every meaningful problem traces back to an unmet need or want, creating pain points that design work seeks to alleviate.
This human-centred approach has helped us navigate complex, chaotic and seemingly impossible challenges and it becomes even more crucial when working with AI systems which require careful consideration of how humans actually behave versus how we expect them to behave.
Behavioural science provides the missing link in this equation. Human brains operate through sophisticated networks of shortcuts and biases that have been essential to our survival. These cognitive patterns, often mischaracterised as purely negative, actually represent millions of years of evolutionary optimisation. The challenge lies not in eliminating these biases but in understanding how they interact with AI systems to either enhance or undermine our decision-making capabilities.
The double bias problem
When humans interact with AI, we encounter what can be termed the ‘double bias problem’. AI systems are trained on human-generated data, inheriting the biases embedded in that information. Simultaneously, humans approach these systems with their own cognitive biases, creating a compound effect that can amplify rather than correct human error.
This becomes particularly problematic because humans tend to place greater trust in technology than in other humans. While we readily acknowledge that humans are fallible, we often assume that technology is inherently more accurate and reliable. This trust differential creates dangerous blind spots, especially when AI systems produce confident-sounding, but incorrect outputs.
Practical strategies for human-AI collaboration
The solution requires deliberate strategies that leverage AI capabilities while maintaining human oversight and critical thinking. One effective approach involves persona-based prompting, where AI systems are instructed to assume different professional roles and perspectives. For example, by asking an AI to review ideas from the standpoint of a risk manager, legal expert or industry specialist, we can surface potential issues and alternative viewpoints that might otherwise remain hidden.
This technique works because it forces the AI system to draw from different aspects of its training data and the human user to consider multiple perspectives. However, the human must still evaluate these responses based on their understanding of what each professional role would actually prioritise and how they would communicate their concerns.
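The persona-based prompting described above can be sketched in code. This is a minimal, illustrative example only: the `ask_llm` parameter is a placeholder for whichever model API you use, and the persona list and wording are assumptions, not a prescribed set.

```python
# A sketch of persona-based prompting: the same idea is reviewed from
# several assumed professional standpoints. `ask_llm` is a stand-in for
# any function that sends a prompt to a language model and returns text.

PERSONAS = {
    "risk manager": "Identify operational, financial and reputational risks.",
    "legal expert": "Flag regulatory, contractual and liability concerns.",
    "industry specialist": "Assess feasibility against current industry practice.",
}

def build_persona_prompt(persona: str, focus: str, idea: str) -> str:
    """Compose a prompt that asks the model to adopt a professional role."""
    return (
        f"You are a {persona}. {focus}\n"
        f"Review the following idea and list your top concerns:\n{idea}"
    )

def review_from_all_angles(idea: str, ask_llm) -> dict:
    """Collect one persona-framed response per professional role."""
    return {
        persona: ask_llm(build_persona_prompt(persona, focus, idea))
        for persona, focus in PERSONAS.items()
    }
```

The human's job is unchanged by the automation: each returned review still needs to be weighed against what that role would actually prioritise, as noted above.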
The design-AI-brain science triangle
Understanding the intersection of Design Thinking, AI capabilities and human cognition creates a framework for more effective collaboration. Design Thinking provides the structure for understanding problems and evaluating solutions. AI offers rapid processing of large amounts of information and can generate ideas without the self-censorship or fear of judgment that often limits humans. Behavioural science helps us understand how to present ideas, organise information and shape solutions to make decision-making and adoption easier.
Attempts to solve problems often fail because we overlook how highly contextual problem-solving is. The design-AI-brain science triangle offers a way to understand that context and create tailored solutions at an accelerated pace. However, it is only effective when the strengths of AI technologies are blended with human strengths.
Organisational implementation considerations
The integration of AI into decision-making processes requires careful consideration of human psychological responses to change. From a behavioural science perspective, change represents a threat to the brain’s preference for predictability and stability. This threat response occurs regardless of the potential benefits of the change. The brain treats anything that may alter the status quo as a threat and responds accordingly, so the job of leaders is to help minimise that threat response.
One way organisations can minimise the threat response is by encouraging teams to give it a try: play around with how AI might fit into their everyday work and share what they are learning. Encouraging employees to experiment with AI tools and contribute ideas about implementation creates ownership and reduces threat perception. As any change expert will tell you, people adopt a change much faster when they feel they are part of creating it. This approach also generates valuable insights about how to practically apply AI within business processes that might not be apparent to leadership teams.
Leaders also need to emphasise and reinforce the value of human capabilities. This needs to be done not just through words, but through actions. There is a long-held tendency in organisations to believe that technology will be the saviour, the silver bullet that will alleviate all that ails them.
Over the decades, this view has devalued human capabilities, which are often labelled an ‘expense’ while technology is viewed as an ‘investment’. The error in this view is that technology alone does not give organisations a competitive advantage; it never has and it likely never will. The competitive advantage comes from combining human strengths with technology, and such blending will become increasingly important as the use of AI becomes more prevalent.
Educational and workforce implications
The widespread adoption of AI not only has significant implications for leaders and organisations but also for educational institutions that prepare humans for the workforce.
Traditional entry-level work that served as training grounds for new graduates may increasingly be handled by AI. This shift requires reconsidering what capabilities new workers need and how educational institutions prepare students for AI-integrated workplaces. It is likely that new entrants into the workforce will need to have the critical and strategic thinking of a mid-career employee in an AI-integrated world. It is also likely that new entrants to the workforce will be expected to work more autonomously, taking on greater decision-making and influencing responsibilities at an earlier career stage than previously expected.
What does all this mean for how we educate, train and prepare the next generation workforce? Are current educational institutions prepared to make these shifts?
The ethical dimension
The ethical implications of AI adoption are unfolding and won’t likely be fully understood for decades to come. However, we have an opportunity to learn from how social media technologies, for example, have developed. When the use of mainstream social media began, there was little understanding of its full potential or ethical implications. Today, we know its impact on adolescents and developing brains, the dopamine response to dark patterns and its ability to influence everything from buying behaviour to policies and politics. While we may have dropped the ball on the ethical use of social media, we can and must do better with AI.
One ethical consideration is the impact AI will have on human livelihoods. To a large degree, many organisations currently view AI as a replacement for their human workforce. What is the ethical responsibility of organisations when AI replaces jobs? This isn’t just an altruistic or ethical question but also a practical one. When 60% of people are unemployed, who will be your customers?
Another ethical consideration is our collective responsibility to ensure that AI exists to enhance the human experience, not replace it. The purpose of all technology is to serve humans but, as history shows us, we can quickly become the servants of technology. Just look at the relationship we have with our smartphones and social media.
Another ethical consideration is the impact of AI use on the human brain. Emerging research from MIT suggests that over-reliance on AI tools has the potential to decrease cognitive abilities. While this research is in its early days, it is important to discuss the ethical implications of using AI, especially in educational contexts with young people. The dilemma will be preparing young people to enter an AI-enabled world while also creating a learning environment that allows their cognitive capabilities to develop.
Looking forward
As the adoption of AI continues and its capabilities expand, the need for developing human capabilities such as critical, strategic and creative thinking will become more crucial. Organisations that successfully navigate the integration of human and AI capabilities will gain a competitive advantage.
The successful companies will use AI to handle routine processing and generate initial ideas while reserving critical evaluation, creative synthesis and ethical judgment for human cognition.
Don’t outsource your intuition. Instead, learn to amplify it through thoughtful collaboration with AI. The future belongs to those who can effectively combine human insight with AI capability, creating solutions that neither could achieve alone.

The HYPR Team
HYPR is a team of curious empaths whose mission includes teaching and learning with the confidence to make a difference and create moments for others.