
Balancing the agentic innovation opportunity with security
Posted by Tom Lahoud, Sean Spires, The HYPR Team . Mar 30.26
The Artificial Intelligence landscape is evolving at breakneck speed, and organisations across Australia and New Zealand are grappling with a fundamental challenge: how to harness the innovative potential of AI agents while maintaining robust security standards.
This tension between innovation and security represents a pressing concern for engineering leaders and CIOs today.
The current state of AI adoption
Recent observations from industry events and customer interactions reveal a consistent pattern. When engineering and product teams are polled, approximately 80 percent indicate they are actively experimenting with AI, whether incorporating it into existing products or developing entirely new AI-driven experiences.
However, when asked about production deployments, only a small fraction keep their hands raised. This dramatic drop-off highlights the security concerns that are preventing organisations from shipping AI-enabled features to market.
The reluctance to move from prototype to production stems largely from the unpredictable nature of AI agents.
Unlike traditional applications where user actions can be anticipated and contained within defined parameters, AI agents operate non-deterministically. They can go off script, spawn additional agents and make inferences based on perceived user intentions. This inherent unpredictability creates anxiety among technology leaders who fear ending up in headlines for the wrong reasons.
Understanding AI agents as a new identity type
A fundamental shift in thinking is required when approaching AI agent security. These agents represent an entirely new identity type that cannot be secured using traditional application security models. The distinction between chatbots and agents is crucial here: while chatbots primarily respond to queries, agents take action on behalf of users.
This capability to execute actions introduces complex security considerations that must be addressed systematically. AI agents need to authenticate users, inherit appropriate permissions and access sensitive data while maintaining the principle of least privilege. The challenge lies in ensuring that agents can only access what the controlling user is authorised to access, without being over-provisioned with permissions.
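As a rough sketch of that principle, an agent's effective permissions can be computed as the intersection of what the user actually holds and what the agent declared it needs up front. The types and permission strings below are illustrative, not a prescribed model:

```typescript
// Least-privilege delegation sketch: the agent never receives more than
// the controlling user could exercise directly, and never more than it
// declared it needs. All names here are illustrative.

type Permission = string; // e.g. "records:read", "payments:write"

interface UserContext {
  userId: string;
  permissions: Set<Permission>;
}

interface AgentManifest {
  agentId: string;
  // Declared up front when the agent is registered, never open-ended.
  requestedPermissions: Set<Permission>;
}

function deriveAgentPermissions(
  user: UserContext,
  agent: AgentManifest
): Set<Permission> {
  const effective = new Set<Permission>();
  for (const permission of agent.requestedPermissions) {
    if (user.permissions.has(permission)) {
      effective.add(permission);
    }
  }
  return effective; // user ∩ agent: over-provisioning is ruled out by construction
}
```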
Three cornerstones of AI agent security
Securing AI agents effectively requires attention to three critical areas. The first cornerstone involves proper authentication, ensuring that agents can definitively identify the users they represent while maintaining a distinct identity as an AI system acting on behalf of that user.
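One established way to express that dual identity is OAuth 2.0 Token Exchange (RFC 8693), in which the agent presents the user's token as the subject and its own credential as the actor, and the issued token records both. A minimal sketch, assuming a hypothetical identity provider endpoint and an illustrative scope:

```typescript
// OAuth 2.0 Token Exchange (RFC 8693) sketch. The issued token typically
// carries the user in its "sub" claim and the agent in an "act" (actor)
// claim, so downstream services can see both identities. The endpoint URL
// and scope are placeholders.

async function exchangeForDelegatedToken(
  userAccessToken: string, // proves which user the agent represents
  agentToken: string       // proves the agent's own identity
): Promise<string> {
  const response = await fetch("https://idp.example.com/oauth2/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "urn:ietf:params:oauth:grant-type:token-exchange",
      subject_token: userAccessToken,
      subject_token_type: "urn:ietf:params:oauth:token-type:access_token",
      actor_token: agentToken,
      actor_token_type: "urn:ietf:params:oauth:token-type:access_token",
      scope: "records:read", // narrowed to what this task actually needs
    }),
  });
  if (!response.ok) {
    throw new Error(`Token exchange failed: ${response.status}`);
  }
  const { access_token } = await response.json();
  return access_token;
}
```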
The second cornerstone focuses on authorisation. Rather than building complex permission systems in-house, organisations benefit from leveraging established identity and access management services. This ensures that user permissions flow appropriately to the agents they control, maintaining consistent security policies across all interactions.
The third cornerstone addresses data security. AI agents often require access to sensitive information to perform their functions effectively. Healthcare applications, for instance, may need to access patient records, while financial services might require payment information. Individually harmless data points can become powerful and potentially dangerous when combined by AI systems, making robust data governance essential.
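As a minimal illustration of that governance in code, every field an agent might see can carry a sensitivity label, and anything above the agent's clearance is stripped before it reaches the model. The levels and field shape here are assumptions rather than a standard schema:

```typescript
// Classification-based redaction sketch: fields above the agent's
// clearance never leave the data layer. Levels are illustrative.

type Sensitivity = "public" | "internal" | "confidential" | "restricted";

const LEVELS: Sensitivity[] = ["public", "internal", "confidential", "restricted"];

interface ClassifiedField {
  name: string;
  value: unknown;
  sensitivity: Sensitivity;
}

function redactForAgent(
  fields: ClassifiedField[],
  clearance: Sensitivity
): Record<string, unknown> {
  const visible: Record<string, unknown> = {};
  for (const field of fields) {
    if (LEVELS.indexOf(field.sensitivity) <= LEVELS.indexOf(clearance)) {
      visible[field.name] = field.value;
    }
  }
  return visible;
}
```

A gate like this also limits the mosaic risk described below: an agent cannot combine data points it was never shown.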
Emerging attack vectors and vulnerabilities
The non-deterministic nature of AI agents introduces novel attack vectors that security teams must understand and mitigate. Unconstrained access represents a significant vulnerability as agents can probe and access internal systems they were never intended to reach.
The McDonald’s hiring chatbot breach in the United States provides a sobering example: weakly protected internal APIs behind a chatbot that processed employment applications were discovered and exploited, ultimately exposing sensitive candidate information.
Privilege escalation presents another critical concern. Agents can potentially trick systems into granting access to data that the controlling user should not be able to see. This attack vector is particularly concerning because identity-based attacks already represent the primary cybersecurity threat vector, and compromising an agent’s identity could enable attacks against many users at once rather than a single individual.
Data leakage through reasoning represents a more subtle but equally dangerous vulnerability. AI agents can inadvertently reveal sensitive information by combining seemingly innocuous data points.
For example, agents might piece together non-sensitive documentation to deduce confidential information such as pending mergers or acquisitions. This capability essentially democratises mosaic theory techniques that were previously limited to specialised investigators.
Practical steps for secure AI implementation
Organisations looking to implement AI agents securely should begin with data governance rather than AI governance specifically. This foundational approach involves cataloguing all data that AI systems might access, classifying it by sensitivity level and establishing clear policies around access and usage.
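A catalogue like this only pays off if it is machine-enforceable rather than a spreadsheet. One possible shape, with illustrative dataset names and policy fields:

```typescript
// Data catalogue sketch: each dataset an AI system might touch gets an
// owner, a classification and explicit usage policies, with AI access as
// an opt-in rather than a default. The schema is illustrative.

interface CatalogueEntry {
  dataset: string;           // e.g. "crm.customer_contacts"
  owner: string;             // accountable team or individual
  classification: "public" | "internal" | "confidential" | "restricted";
  allowedPurposes: string[]; // checked at access time, e.g. ["support-triage"]
  aiAccessible: boolean;     // explicit opt-in, never assumed
}

const catalogue: CatalogueEntry[] = [
  {
    dataset: "crm.customer_contacts",
    owner: "data-platform",
    classification: "confidential",
    allowedPurposes: ["support-triage"],
    aiAccessible: true,
  },
];

function agentMayAccess(dataset: string, purpose: string): boolean {
  const entry = catalogue.find((e) => e.dataset === dataset);
  return !!entry && entry.aiAccessible && entry.allowedPurposes.includes(purpose);
}
```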
Building a centralised authorisation framework represents the second critical step. Rather than treating each AI feature as an isolated security problem, organisations should implement consistent policy engines that manage permissions across all applications, APIs and AI systems.
This approach ensures that when agents need to access databases or call APIs, they must pass through standardised security controls.
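In code, that can be a single fail-closed gate that every tool call passes through before touching a database or API. The endpoint and request shape below are hypothetical stand-ins for whatever policy engine an organisation adopts:

```typescript
// Central authorisation gate sketch: every agent action is expressed as
// (subject, action, resource) and checked against one policy service.
// The endpoint is a placeholder for an in-house or off-the-shelf engine.

interface AuthzRequest {
  subject: { userId: string; agentId: string }; // both identities, per the token above
  action: string;   // e.g. "read"
  resource: string; // e.g. "patients/42/records"
}

async function authorise(req: AuthzRequest): Promise<boolean> {
  const response = await fetch("https://authz.internal.example.com/v1/check", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!response.ok) return false; // fail closed if the policy service is unreachable
  const { allowed } = await response.json();
  return allowed === true;
}

async function readRecord(req: AuthzRequest): Promise<unknown> {
  if (!(await authorise(req))) {
    throw new Error("Denied by central policy");
  }
  // ...perform the actual read here
  return null;
}
```

Failing closed is the safer default for agent-initiated actions: an unreachable policy service should mean no access, not unconstrained access.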
Human-in-the-loop approval mechanisms become essential for high-stakes actions. Agents should prompt users for explicit approval before executing sensitive or irreversible actions. This might involve stepping up authentication requirements to verify user identity before proceeding with critical tasks.
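A minimal sketch of such a gate, with illustrative risk labels and a host-supplied approval prompt that can step up authentication:

```typescript
// Human-in-the-loop sketch: high-risk actions are parked until the user
// explicitly confirms, with step-up authentication (e.g. a passkey or MFA
// re-prompt) for the riskiest ones. Names and labels are illustrative.

type Risk = "low" | "high";

interface ProposedAction {
  id: string;
  description: string; // shown verbatim to the user before approval
  risk: Risk;
  execute: () => Promise<void>;
}

// Supplied by the host application: renders the prompt and, when stepUp
// is true, re-verifies the user's identity before accepting approval.
type ApprovalPrompt = (action: ProposedAction, stepUp: boolean) => Promise<boolean>;

async function runWithApproval(
  action: ProposedAction,
  promptUser: ApprovalPrompt
): Promise<void> {
  if (action.risk === "high") {
    const approved = await promptUser(action, /* stepUp */ true);
    if (!approved) {
      throw new Error(`Action ${action.id} rejected by user`);
    }
  }
  await action.execute(); // low-risk actions proceed without interruption
}
```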
The innovation imperative
Despite these security challenges, the pressure to innovate with AI remains intense. Users increasingly expect to interact with services through AI interfaces, with many now searching through Large Language Models rather than traditional search engines.
Real estate platforms, legal technology companies and accounting firms are already deploying AI assistants that can not only provide information, but execute actions on behalf of users.
The competitive landscape demands that organisations find ways to meet users where they are, which increasingly means providing AI-enabled experiences. Companies that fail to adapt risk losing relevance as users migrate to platforms that offer more intuitive, AI-powered interactions.
Building for the future
The key to successful AI implementation lies in recognising that existing security principles remain valid even as architectural patterns evolve. The transition from monolithic applications to microservices required new approaches while maintaining core security concepts – and the same applies to AI agent development.
Organisations should focus their engineering efforts on building differentiated product features rather than recreating identity and access management systems. Leveraging established authentication and authorisation services allows teams to concentrate on delivering value to users while maintaining security standards that have been proven at scale.
The path forward requires balancing innovation with security, ensuring that the exciting possibilities of AI agents can be realised without compromising user trust or organisational integrity.
By treating AI agents as the new identity type they represent and implementing appropriate security frameworks from the outset, organisations can confidently deploy AI-enabled features that delight users while protecting sensitive data and maintaining compliance requirements.
Success in this new landscape belongs to organisations that can navigate the tension between moving quickly and moving securely, ultimately delivering innovative AI experiences that users can trust.