It's Too Late to Secure AI: Why Trans-Tasman Organisations Must Focus on Governance

When I read EY's recent findings that half of all organisations have been negatively impacted by AI security vulnerabilities, my first thought wasn't surprise. It was validation of what we've been telling businesses across Australia and New Zealand for months: the traditional approach to AI security has already failed.

Here's the uncomfortable truth: it's too late to secure AI the way we've secured everything else.

In recent months, I've engaged with a wide range of organisations on the realities of AI, specifically the security gaps and the critical importance of AI governance. Across several AI-focused webinars and direct conversations with executives, a common theme has emerged: the AI equivalent of Pandora's Box has been opened, and the potential risks are both significant and immediate.

The Security Patchwork Is Breaking Down

The EY report reveals a stark reality. Only 14% of CEOs believe their AI systems adequately protect sensitive data. Meanwhile, organisations are deploying an average of 47 different security tools across their networks. This patchwork approach isn't just failing with AI; it's amplifying the problem.

The numbers tell an even more alarming story. Cybercriminals' breakout time has plummeted from roughly an hour in 2023 to just 18 minutes in mid-2025. Voice phishing attacks have surged 442% in the second half of 2024 alone. AI isn't just creating new vulnerabilities; it's weaponising existing ones at unprecedented speed.

But here's where most organisations get it wrong: they're trying to solve an AI governance problem with more security tools.

Why Traditional Security Can't Keep Pace

The fundamental issue is that AI operates at a velocity that traditional security controls simply cannot match. By the time you've identified a vulnerability, assessed the risk, and deployed a control, the AI landscape has already evolved. The threat has morphed. The use case has expanded.

Consider what's happening right now across Australia and New Zealand. Sixty-eight per cent of companies let employees develop or deploy AI agents without high-level approval, and only 60 per cent issue any guidance for AI work. This isn't a security failure; it's a governance vacuum.

AI is fundamentally different from the technologies we've secured before. It learns. It adapts. It makes decisions. It creates outputs that didn't exist seconds earlier. You cannot secure something that is, by design, built to evolve beyond your parameters.

The Governance Imperative

This is where the conversation needs to shift from security to governance. Not because security doesn't matter, but because without governance, security becomes a game of perpetual catch-up that you're destined to lose.

Governance asks different questions:

  • Who has authority to deploy AI systems?
  • What data can AI access and under what conditions?
  • How do we ensure human oversight remains meaningful?
  • What ethical frameworks guide our AI development?
  • How do we maintain transparency in AI decision-making?

These aren't questions that additional security tools can answer. They require organisational frameworks, clear policies, and continuous management systems.
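
To make this concrete, here is a minimal sketch of how those questions could be turned into a repeatable approval check rather than left as policy prose. It's written in Python purely for illustration; the record fields, permitted data classes, and rules are assumptions for this example, not requirements taken from ISO 42001 or any regulator.

```python
from dataclasses import dataclass, field

# Hypothetical record for a proposed AI deployment. Field names mirror the
# governance questions above; they are illustrative, not an ISO 42001 schema.
@dataclass
class AIDeploymentRequest:
    system_name: str
    owner: str                              # accountable business owner
    approved_by: str | None = None          # who has authority to deploy?
    data_classes: list[str] = field(default_factory=list)  # what data can it access?
    human_oversight: str = ""               # how is human review kept meaningful?
    ethics_review_done: bool = False        # which ethical framework was applied?
    decision_logging: bool = False          # can we explain and trace its outputs?

# Data classes this (hypothetical) organisation permits AI systems to touch.
PERMITTED_DATA = {"public", "internal"}

def governance_gaps(req: AIDeploymentRequest) -> list[str]:
    """Return the governance questions a deployment request leaves unanswered."""
    gaps = []
    if not req.approved_by:
        gaps.append("No named approver with authority to deploy")
    if not set(req.data_classes) <= PERMITTED_DATA:
        gaps.append("Requests data classes outside the permitted set")
    if not req.human_oversight:
        gaps.append("No defined human oversight mechanism")
    if not req.ethics_review_done:
        gaps.append("Ethics review not completed")
    if not req.decision_logging:
        gaps.append("No transparency or logging of AI decisions")
    return gaps

if __name__ == "__main__":
    request = AIDeploymentRequest(
        system_name="customer-email-triage",
        owner="Head of Customer Service",
        data_classes=["internal", "customer_pii"],
    )
    for gap in governance_gaps(request):
        print("GAP:", gap)
```

The point isn't the code; it's that every question in the list above becomes a recorded, auditable answer before an AI system goes anywhere near production.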

ISO 42001: The Framework Trans-Tasman Organisations Need

This is precisely why ISO/IEC 42001, the world's first international standard for Artificial Intelligence Management Systems (AIMS), represents the future of responsible AI deployment.

ISO 42001 provides a comprehensive framework covering AI governance structures, risk assessment specific to AI, data governance throughout the AI lifecycle, human oversight requirements, performance monitoring, and ethical transparency.

It's not about adding another security layer. It's about establishing the management system that makes AI security possible in the first place.
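
As a rough illustration of what "management system" means in practice, the sketch below treats each coverage area described above as a register entry with an owner and a review cadence. The area names echo this article's summary rather than ISO/IEC 42001's actual clause or control titles, and the owners and intervals are assumptions for the example.

```python
from datetime import date, timedelta

# Illustrative AIMS register. Area names follow the article's summary above,
# not ISO/IEC 42001's clause or Annex A control titles; owners and review
# cadences are assumptions, not prescribed by the standard.
AIMS_AREAS = {
    "AI governance structure":        {"owner": "Board / AI steering group",  "review_days": 90},
    "AI-specific risk assessment":    {"owner": "Risk & Compliance",          "review_days": 90},
    "Data governance (AI lifecycle)": {"owner": "Chief Data Officer",         "review_days": 30},
    "Human oversight":                {"owner": "Business unit leads",        "review_days": 30},
    "Performance monitoring":         {"owner": "CISO / Security operations", "review_days": 7},
    "Ethical transparency":           {"owner": "Legal / Ethics committee",   "review_days": 180},
}

def reviews_due(last_reviewed: dict[str, date], today: date) -> list[str]:
    """List the AIMS areas whose periodic review is missing or overdue."""
    overdue = []
    for area, meta in AIMS_AREAS.items():
        last = last_reviewed.get(area)
        if last is None or today - last > timedelta(days=meta["review_days"]):
            overdue.append(f"{area} (owner: {meta['owner']})")
    return overdue

if __name__ == "__main__":
    # Example: only two areas have ever been reviewed.
    history = {
        "AI governance structure": date(2025, 1, 15),
        "Performance monitoring":  date(2025, 6, 1),
    }
    for item in reviews_due(history, today=date(2025, 7, 1)):
        print("REVIEW DUE:", item)
```

A security tool can't tell you that your ethics reviews have lapsed; a management system is built to.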

At Insicon Cyber, we're working with organisations across Australia and New Zealand pursuing ISO 42001 compliance, and they're gaining something more valuable than better security: control. They move from reactive firefighting to proactive governance. They transform AI from a risk management nightmare into a governed, auditable business capability.

The Trans-Tasman Context

For businesses operating across Australia and New Zealand, the governance imperative is even more pressing. We're operating in regulatory environments that are rapidly evolving, albeit with different approaches and timelines.

Australia's Regulatory Landscape

Australian businesses face an increasingly complex regulatory environment. The Privacy Act amendments, the SOCI Act requirements, and emerging AI-specific regulations aren't going to wait for organisations to figure this out organically. The Essential Eight framework continues to set baseline security expectations, while industry-specific regulations add further complexity.

New Zealand's Approach

New Zealand has adopted what its government calls a "light-touch, proportionate and risk-based approach to AI regulation." The Privacy Act 2020, with its 13 Information Privacy Principles (IPPs), applies directly to AI tool usage. The Office of the Privacy Commissioner released specific guidance on AI in September 2023, emphasising that privacy compliance must be considered from the outset of any AI project.

New Zealand's approach includes some unique elements that organisations need to understand. The requirement to consider te ao Māori perspectives on privacy reflects New Zealand's bicultural foundation. The Algorithm Charter for Aotearoa New Zealand, while voluntary, signals clear government expectations for fair, ethical, and transparent AI development.

In July 2025, New Zealand released its first AI Strategy alongside Responsible AI Guidance for Businesses, cementing the shift towards structured AI governance rather than reactive security measures.

The Common Thread

What's striking about both markets is the convergence on a fundamental principle: existing privacy and security laws apply to AI, but AI requires specific governance frameworks to make compliance meaningful.

The EY report notes that companies should focus on protecting data integrity, maintaining supply chain integrity for AI tools, and embedding security considerations into every stage of AI development. These recommendations are sound, but they're incomplete without the governance framework to make them operational across diverse regulatory environments.

ISO 42001 compliance provides that framework. It demonstrates to regulators in both Australia and New Zealand, to customers, and to stakeholders that you're not just using AI but governing it responsibly. In environments where trust is currency and regulatory scrutiny is intensifying, that governance becomes your competitive advantage.

From Strategy to Operations: The Comprehensive Approach

This is where Insicon Cyber's comprehensive cybersecurity partnership becomes essential. ISO 42001 compliance isn't a checkbox exercise. It requires deep integration between strategic advisory and operational delivery, adapted to the specific regulatory context of each market.

Our approach to ISO 42001 compliance spans the entire journey:

  • Strategic Foundation: Gap analysis that benchmarks your current AI governance and identifies business risks specific to your industry and operations, with clear mapping to both Australian and New Zealand regulatory requirements (a simplified sketch of this mapping follows this list).
  • AIMS Development: Working directly with your stakeholders to design and establish your Artificial Intelligence Management System (AIMS), integrating it seamlessly with your existing governance processes while addressing the unique considerations of each market you operate in.
  • Regulatory Alignment: Ensuring your AIMS addresses the specific requirements of Australian regulations (SOCI Act, Privacy Act, Essential Eight) and New Zealand's framework (Privacy Act 2020, IPPs, Office of the Privacy Commissioner expectations), with particular attention to New Zealand's te ao Māori considerations where relevant.
  • Operational Implementation: Hands-on support with change management, tool selection, and policy deployment that actually works in your environment, whether you're operating in one market or both.
  • Continuous Partnership: Ongoing compliance support, annual reviews, and rapid response when regulatory changes or incidents occur in either jurisdiction.
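
As a simplified illustration of what the gap-analysis step in that list produces, the sketch below compares an organisation's existing controls against placeholder obligation lists for each market. The obligation names are hypothetical shorthand keyed to the regimes mentioned above, not authoritative summaries of the SOCI Act, either Privacy Act, the Essential Eight, or the IPPs.

```python
# Illustrative gap analysis across both markets. The obligation sets are
# hypothetical shorthand for the regimes named in the article; a real
# assessment maps to the actual legislative and ISO 42001 requirements.
AU_OBLIGATIONS = {
    "privacy_impact_assessment",         # Privacy Act
    "critical_asset_register",           # SOCI Act
    "essential_eight_baseline",          # Essential Eight
    "ai_risk_assessment",
}

NZ_OBLIGATIONS = {
    "privacy_impact_assessment",         # Privacy Act 2020 / IPPs
    "ai_risk_assessment",                # OPC guidance on AI
    "te_ao_maori_consultation",          # where relevant
    "algorithm_transparency_statement",  # Algorithm Charter (voluntary)
}

def gap_analysis(current_controls: set[str]) -> dict[str, set[str]]:
    """Return outstanding obligations per jurisdiction, given current controls."""
    return {
        "AU": AU_OBLIGATIONS - current_controls,
        "NZ": NZ_OBLIGATIONS - current_controls,
        "shared": (AU_OBLIGATIONS & NZ_OBLIGATIONS) - current_controls,
    }

if __name__ == "__main__":
    in_place = {"privacy_impact_assessment", "essential_eight_baseline"}
    for jurisdiction, gaps in gap_analysis(in_place).items():
        print(f"{jurisdiction}: {sorted(gaps) if gaps else 'no gaps'}")
```

The "shared" view is the point of a single trans-Tasman AIMS: one set of controls, assessed once, satisfying overlapping obligations in both markets.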

This isn't security consulting. This isn't managed services. It's the integrated partnership that connects boardroom strategy to operational reality, ensuring your AI governance works as hard as your business does across both markets.

The Choice Trans-Tasman Organisations Face

The EY report should serve as a wake-up call, but not in the way most will interpret it. The answer isn't more security tools. It isn't faster incident response. It isn't even better threat detection.

The answer is accepting that AI security begins with AI governance. It's recognising that the window for reactive approaches has closed. It's understanding that organisations which establish robust governance frameworks now will be the ones still standing when both regulatory environments fully mature.

Businesses across Australia and New Zealand have a choice: continue patching security gaps in AI systems that are evolving faster than your controls, or establish the governance frameworks that make AI a managed, compliant, strategic asset.

At Insicon Cyber, we've made our position clear. The future of AI in trans-Tasman organisations isn't about securing every possible vulnerability. It's about governing AI with the same rigour we've applied to information security, but with frameworks designed specifically for AI's unique challenges.

ISO 42001 provides that framework. The question is whether your organisation will adopt it proactively or reactively.

Regional Expertise, Global Standards

What makes ISO 42001 particularly powerful for trans-Tasman organisations is its international recognition combined with the flexibility to address regional regulatory requirements. Whether you're navigating Australia's more prescriptive security requirements or New Zealand's principles-based privacy approach, ISO 42001 provides the management system that bridges both.

Our experience with ISO 27001 across both markets has taught us that successful certification isn't about imposing generic frameworks. It's about understanding the nuances of each regulatory environment while building management systems that scale across jurisdictions. We bring that same expertise to ISO 42001 implementation.

For organisations operating across the Tasman, this means a single AIMS that addresses both Australian and New Zealand requirements, reducing complexity rather than adding to it. It means policies that work in both regulatory contexts. It means audit readiness that satisfies stakeholders in both markets.

Taking Action

If you're a CEO looking at that 14% statistic and wondering if your AI systems adequately protect sensitive data, the answer is almost certainly no. If you're a CISO managing 47 security tools and watching breakout times shrink, you know the current approach isn't sustainable.

The path forward requires honest assessment, committed governance, and comprehensive partnership. It requires accepting that AI governance is now a boardroom issue, not just a technical one. And increasingly, it requires understanding that regulatory expectations in both Australia and New Zealand are converging on the same fundamental requirement: demonstrate that your AI systems are governed, not just secured.

From boardroom strategy to operational excellence, Insicon Cyber helps organisations across Australia and New Zealand navigate the transition from AI security chaos to governance-driven control. We bring deep ISO 27001 experience to ISO 42001 implementation, ensuring your AIMS is robust, auditable, and genuinely effective in both markets.

The threat landscape won't slow down. The regulatory environments won't become simpler. But with the right governance framework and the right partner, trans-Tasman organisations can turn AI from their biggest vulnerability into their most governed capability.

Ready to move beyond security patchwork to comprehensive AI governance?

Let's have that conversation.


About Matt Miller

Matt Miller is CEO of Insicon Cyber, Australia and New Zealand's trusted cybersecurity partner delivering comprehensive solutions from executive advisory to managed security operations. With deep expertise in both strategic governance and operational delivery across trans-Tasman markets, Matt helps organisations navigate complex cybersecurity challenges including AI governance, regulatory compliance, and adaptive security operations.
