The promise and peril of agentic AI in government

AI is set to redefine how governments deliver for their citizens. For the Australian Public Sector, this presents both an immense opportunity and a serious responsibility, writes Nam Lam.

In a recent keynote, the Minister for Industry and Innovation, Senator Tim Ayres, underlined that artificial intelligence and the digital economy are now central to Australia’s national productivity ambitions, noting:

AI adoption is not a future task for firms and government – it is well and truly underway. The Australian challenge is to lean in to adopt AI to lift productivity and living standards, deliver investment in infrastructure and capability and protect our security.

Just as the cloud transformed enterprise IT, agentic AI is set to redefine how governments deliver for their citizens. But this time, the wave is moving faster and demands stronger governance from the outset. For the Australian Public Sector, this presents both an immense opportunity and a serious responsibility.

Agentic AI systems act autonomously towards defined goals. They perceive, decide, and execute without waiting for explicit human prompts. For the APS, the potential benefits are enormous. These technologies can drive greater efficiency by automating high-volume administrative tasks such as case management and permit processing. They can enable round-the-clock citizen support through always-available virtual agents. They can scale effortlessly in response to peaks in demand, such as during crises or major policy changes, and bring sharper, more responsive decision-making through timely, data-driven insights.

But with great power comes real risk.

We’ve seen what happens when IT systems act without oversight, and the consequences can be devastating. Consider the UK Post Office Horizon scandal, in which blind faith in the technology saw many subpostmasters prosecuted for errors made by the IT system, ruining their lives. The underlying problem was not the technical error itself but a deep failure in oversight and responsibility. Closer to home, the Australian Government was the second most-breached sector in 2024, reporting 63 data breaches from January to June, and its agencies were the slowest to identify incidents, with 87 per cent detected more than 30 days after they occurred. These are the real-world consequences when oversight fails.

As government agencies embrace agentic AI, past lessons must guide the way. While initiatives like GovAI have established general AI policies with “golden rules” for safe AI use, these frameworks don’t specifically address the unique risks of autonomous agents that can make independent decisions across government systems.

Without clear governance, transparency and control, the enormous potential benefits of agentic AI can quickly turn into public trust disasters.

Quantifiable risk

To bring this risk to life, our recent agentic AI research found that 72 per cent of security professionals believe AI agents are riskier than traditional machine identities. These agents have already triggered real-world incidents: 80 per cent of organisations report that AI agents have taken unintended actions, including unauthorised access to systems (39 per cent) and the sharing of sensitive data (33 per cent). Alarmingly, nearly one in four say AI agents have been tricked into revealing access credentials. Yet despite these risks, only 44 per cent of organisations report having governance policies in place to manage these agents.

There are a number of measures agencies can take to mitigate these risks. AI transformation projects should start small and keep humans in the loop as a fallback. Comprehensive guardrails and operational boundaries must be applied, with continuous monitoring, red teaming, and adversarial testing. Most important of all, though, is controlling what agents can access: like human identities, AI agents require governance.
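To make that concrete, here is a minimal, purely illustrative sketch in Python. The names, policy fields and the permit-processing example are hypothetical assumptions, not any agency system or vendor product; the sketch simply shows two of the measures above working together: an explicit operational boundary for an agent, and a human-in-the-loop fallback for anything outside it.

```python
# Illustrative sketch only: hypothetical names, not a real agency system or product API.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    agent_id: str
    owner: str                                              # every agent has a named human owner
    allowed_actions: set[str] = field(default_factory=set)  # least-privilege allow-list

def audit_log(agent_id: str, action: str, payload: dict) -> None:
    # Every action is recorded so it can be reviewed later.
    print(f"[audit] agent={agent_id} action={action} keys={sorted(payload)}")

def execute(policy: AgentPolicy, action: str, payload: dict) -> str:
    """Run an agent action only if it sits inside the agent's operational boundary."""
    if action not in policy.allowed_actions:
        # Fallback: anything outside the boundary is escalated to a human, never executed silently.
        return f"ESCALATED to {policy.owner}: '{action}' is outside {policy.agent_id}'s boundary"
    audit_log(policy.agent_id, action, payload)
    return f"EXECUTED {action}"

# Hypothetical example: a permit-processing agent may lodge assessments but never approve them.
policy = AgentPolicy("permit-agent-01", owner="case.team@example.gov.au",
                     allowed_actions={"lodge_assessment", "request_documents"})
print(execute(policy, "lodge_assessment", {"applicant": "A-1023"}))
print(execute(policy, "approve_permit", {"applicant": "A-1023"}))
```

The point is not the code but the pattern: the agent’s boundary is explicit, every action is logged, and anything unexpected is escalated to a named owner rather than carried out unchecked.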

Lead with identity governance

The research reveals a potential governance crisis. While 92 per cent of organisations recognise that governing AI agents is critical to enterprise security, the reality falls far short. A staggering 64 per cent of AI agents require multiple identities to access necessary systems and data, yet only 62 per cent of organisations use identity security solutions to manage these complex access patterns.

Agencies must take deliberate steps to discover every agent in their environment, assign clear ownership, enforce least privilege access, and conduct regular access reviews and audits. Agentic AI should be treated like any other identity, be it human, machine, or third party. Governance must be built in from day one.
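As a rough sketch of what such a regular review might look like in practice, the snippet below walks a hypothetical inventory of agent identities and flags two of the conditions just described: agents without an accountable owner, and entitlements that are unused or stale and should be revoked or re-certified. The inventory, field names and 90-day threshold are illustrative assumptions only, not a real identity-governance schema.

```python
# Hypothetical periodic access review for AI agent identities (illustrative data).
from datetime import date, timedelta

agent_inventory = [
    {"agent_id": "permit-agent-01", "owner": "case.team@example.gov.au",
     "entitlements": ["lodge_assessment", "request_documents"],
     "last_used": {"lodge_assessment": date(2025, 6, 2), "request_documents": date(2025, 1, 10)}},
    {"agent_id": "chat-agent-07", "owner": None,  # no accountable owner assigned
     "entitlements": ["read_case_notes", "export_reports"],
     "last_used": {"read_case_notes": date(2025, 6, 1)}},
]

def review(inventory, stale_after=timedelta(days=90), today=date(2025, 6, 30)):
    """Flag agents with no owner and entitlements that are unused or stale."""
    findings = []
    for agent in inventory:
        if not agent["owner"]:
            findings.append((agent["agent_id"], "no accountable owner"))
        for entitlement in agent["entitlements"]:
            used = agent["last_used"].get(entitlement)
            if used is None or today - used > stale_after:
                findings.append((agent["agent_id"],
                                 f"entitlement '{entitlement}' unused or stale: revoke or re-certify"))
    return findings

for agent_id, issue in review(agent_inventory):
    print(f"{agent_id}: {issue}")
```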

Identity security today is no longer just about provisioning and deprovisioning. It’s about understanding, in real time, which identities – human or AI – exist, what they can access, and how that access changes. Without this visibility, every AI deployment is a leap in the dark.

With 98 per cent of organisations planning to expand AI agent deployments within 12 months, identity governance becomes the critical control layer that can scale with autonomous AI adoption whilst providing the visibility that legal, compliance, and executive teams need.

AI agents are moving beyond the hype and into operation. It’s time every government agency asked: who’s governing my AI agents?

With trusted identity governance, Australia’s public sector can lead the world in harnessing agentic AI for smarter, safer citizen outcomes. Without it, the risks are eroded public trust, unchecked exposure, and irreversible policy failures.

Nam Lam, managing director ANZ, SailPoint
