
Teaching AI in an age of uncertainty: the ethical dilemma of educational responsibility

Posted by Daniel Walters, Don Smith, Gareth Evans . Jul 11.25

As educators in the technology space, we’ve always understood the multiplier effect of our work. When you teach a developer, you’re not just impacting one person – you’re influencing every line of code they’ll write, every system they’ll build and every user they’ll ultimately serve.

This amplification effect has traditionally been a source of motivation. But in the age of AI, it’s becoming a source of existential concern.

The question that keeps many of us awake at night isn’t whether AI will transform software development – that’s already happening. The question is whether we can trust the leaders driving this transformation and whether our role in accelerating adoption makes us complicit in outcomes we can’t foresee.

The leadership trust gap

Looking around at the current landscape, many of us struggle with a fundamental question: do we trust the people making the biggest calls about AI’s future? The business leaders, politicians and tech executives steering this ship: are they the right people to be making decisions that will reshape entire industries and millions of careers?

This isn’t just academic scepticism. It’s a practical concern that affects how we approach our work every day. When you’re teaching someone to use AI tools, you’re not just giving them a skill – you’re potentially putting more power into a system whose ultimate direction remains uncertain.

Learning from social media’s unintended consequences

We’ve been here before, though perhaps not at this scale. Social media was supposed to connect people, help maintain relationships across distances and democratise information sharing. The intentions were largely positive, even noble. Yet the mental health crisis, the polarisation, the manipulation of democratic processes… none of these were in the original vision.

The analogy isn’t perfect, but it’s instructive. Even well-intentioned technology can have devastating unintended consequences. And AI feels orders of magnitude more powerful than social media ever was.

The pragmatic reality

Despite these concerns, the pragmatic reality is clear: AI adoption is happening whether we participate or not. The question isn’t whether AI will transform software development – it’s whether that transformation will be guided by people who think deeply about the implications.

If ethically minded educators step back from AI because of these concerns, we don’t prevent the transformation. We just ensure it happens without voices advocating for responsible practices, human-centred approaches and careful consideration of consequences.

Our specific focus: AI-assisted development

This is why we’ve chosen to focus specifically on AI-assisted software development rather than AI in a broad sense. We’re not trying to accelerate the development of artificial general intelligence or contribute to the race for ever-more-powerful models. We’re helping people use existing AI tools more effectively in a specific domain.

The reality is that most of the people we teach will use these skills for mundane applications – building business software, automating routine tasks, creating slightly better user interfaces. The world-changing applications are happening at a different level entirely.

The quality argument

There’s another angle that makes this work feel more justified: the potential for dramatically improved software quality. Anyone who’s worked in software development knows the human cost of bugs, technical debt and poorly written code. These issues don’t just frustrate users – they make developers’ lives miserable.

If AI can help us write better code, catch more bugs and reduce the friction that often makes software development feel like a slog, that’s a significant positive impact. The improvement in both developer experience and end-user experience could justify some of the risks we’re taking.

The time and space dividend

Perhaps most importantly, we’re not advocating for AI to help people work faster or cram more features into already-overloaded development cycles. Instead, we’re promoting the idea that AI should free up time and mental space for more rigorous practices, better design thinking and higher-quality outcomes.

This framing feels crucial. If AI adoption is inevitable, we want to influence how people think about using that newfound efficiency. More time for learning, more space for quality, more opportunity for the kinds of thoughtful development practices that get squeezed out in traditional high-pressure environments.

Living with uncertainty

The truth is that we don’t know how this will all play out. We can’t see around the corner any better than anyone else. What we can do is approach our work with intention, acknowledge the ethical complexities and try to model the kind of thoughtful engagement with AI that we hope to see more broadly.

We’re not trying to solve all of AI’s potential problems through education. We’re trying to ensure that when people encounter these tools, they do so with a framework for thinking about responsibility, quality and human impact.

The path forward

Teaching AI in this moment requires holding multiple truths simultaneously:

  • This technology is powerful and potentially transformative
  • The people currently leading its development may not be the ideal stewards
  • Stepping away from education doesn’t prevent problematic outcomes
  • We have a responsibility to model thoughtful, ethical engagement
  • The specific applications we’re teaching are relatively bounded and practical
  • Quality improvements in software development have real human benefits

None of these truths resolve the fundamental tension, but they help us navigate it with intention rather than just hope.

A call for continued vigilance

As we move forward with AI education, it is essential to maintain this level of ethical reflection. We need to keep asking tough questions about outcomes, pushing for human-centred approaches and modelling the kind of thoughtful engagement we want to see in the field.

No single educational programme or course will determine the future of AI. But if we can contribute to a culture of responsible AI use, if we can help people think more carefully about the implications of their tools and if we can demonstrate that speed and efficiency don’t have to come at the cost of quality and ethics, that feels like work worth doing.

Even in an age of uncertainty.

This reflection emerged from an ongoing conversation about the ethics of AI education. We believe these discussions need to be public, ongoing and honest about the complexities involved. The future of AI will be shaped not just by the technology itself, but by how we collectively choose to engage with it.

Daniel Walters

As Principal Consultant at HYPR, Daniel supports our clients in establishing and deploying their tech strategies by leveraging his experience in CTO, CIO and CPTO positions.
