
AI Governance: The Next Cyber Security Frontier for Australia and New Zealand

Written by Insicon Cyber | 13/10/25 10:36 PM

Artificial intelligence is reshaping business across Australia and New Zealand at remarkable speed.

More than half of Australian enterprises have already embedded AI into their IT and business strategies, representing one of the highest adoption rates globally. New Zealand organisations are following similar trajectories, with businesses across both nations recognising AI's transformative potential. But this acceleration brings sobering realities: 78% of Australians are concerned about negative AI outcomes, and only 36% trust the technology. More telling still, 54% of Australian organisations say reducing AI security and legal risks is "very" or "extremely" difficult.

This tension between rapid adoption and uncertain governance defines the current AI moment across the Tasman. As Cyber Security Awareness Month in Australia and Cyber Smart Week in New Zealand highlight the importance of building cyber safe cultures, AI governance has emerged as a critical frontier where technology, security, privacy, and legal considerations converge in complex ways.

The question facing businesses in both nations isn't whether to adopt AI. That decision has largely been made. The question is how to govern AI systems securely, compliantly, and ethically while extracting their considerable business value.


The Convergence of Privacy, Security, and Legal

AI governance has introduced what experts call "the new triad": privacy, cyber security, and legal working together as essential pillars. Previously, privacy and security collaborated to protect data. AI's complexity demands that legal expertise become equally central.

This convergence happens because AI systems amplify traditional risks while creating entirely new ones. Consider the challenge of data governance. AI systems require vast datasets for training and operation. These datasets often contain personal information subject to the Privacy Act and Australian Privacy Principles. Using this data for AI purposes that weren't contemplated when it was collected creates immediate privacy questions. Organisations must assess whether their current privacy policies permit AI use cases and whether additional consent or transparency measures are required.

Security challenges multiply in AI environments. Data supply chain risks emerge when training data comes from multiple sources, some potentially compromised. Maliciously modified data can poison AI models, causing them to produce biased or incorrect outputs. Data drift occurs when the statistical properties of data change over time, degrading model performance and potentially creating security vulnerabilities.
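
To make data drift concrete, here is a minimal sketch of one common detection approach: comparing a live feature's distribution against its training-time baseline with a two-sample Kolmogorov-Smirnov test. The threshold and synthetic data are illustrative only, not recommendations.

```python
# Minimal sketch of statistical drift detection for one numeric model input.
# The alpha threshold and sample sizes are illustrative, not recommendations.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Flag drift when the live feature distribution differs from the
    training-time baseline under a two-sample Kolmogorov-Smirnov test."""
    statistic, p_value = ks_2samp(baseline, live)
    return p_value < alpha  # low p-value: distributions likely differ

# Example: the model was trained on roughly normal data, but live traffic
# has shifted upwards, so the check should raise a drift alert.
rng = np.random.default_rng(seed=42)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.6, scale=1.0, size=1_000)
print("Drift detected:", detect_drift(baseline, live))
```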

Legal considerations add another layer. AI systems can infringe copyright during training if they consume protected works without authorisation. They can generate outputs that reproduce copyrighted material, creating liability for organisations that use these outputs. In Australia, where copyright protection is automatic but requires human authorship, AI-generated content may lack protection entirely, meaning competitors can freely use it.

The regulatory landscape remains unsettled. While Europe has implemented the AI Act, Australia has not yet introduced AI-specific legislation. However, existing laws around intellectual property, privacy, and industry-specific requirements apply to AI systems. This means organisations must navigate ambiguity while remaining compliant with established frameworks.

The Global Movement Towards AI Governance Standards

The international community is rapidly developing comprehensive AI governance frameworks, with the EU leading regulatory efforts and global standards bodies providing implementation guidance.

The EU AI Act: First Comprehensive AI Regulation

The EU AI Act, which entered into force on 1 August 2024, is the world's first comprehensive regulatory framework for AI. The Act defines four risk categories for AI systems, with regulatory requirements phased in at 6-12 month intervals. Since 2 August 2025, general-purpose AI (GPAI) models must comply with rules mandating transparency, technical documentation, and disclosure of copyrighted material used during training. The Act will be fully applicable by 2 August 2026.

Penalties can range from EUR 7.5 million or 1.5% of worldwide annual turnover to EUR 35 million or 7% of worldwide annual turnover, depending on the type of noncompliance. Like the GDPR before it, experts anticipate the EU AI Act will spur the development of AI governance and ethics standards worldwide.

For businesses operating globally, including those in Australia and New Zealand, the EU AI Act's extraterritorial reach means that organisations providing AI systems or services to EU markets must comply with its requirements, regardless of where they're based.


ISO/IEC 42001: The Global AI Management Standard

ISO/IEC 42001:2023, published in December 2023, is the world's first AI management system standard. It specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organisations. The standard addresses the unique challenges AI poses, including ethical considerations, transparency, bias identification and mitigation, and continuous learning.

ISO 42001 is designed for entities providing or utilising AI-based products or services, ensuring responsible development and use of AI systems. The standard covers issues throughout the AI system lifecycle, from the initial concept phase to the final deployment and operation.

Key requirements include:

  • Leadership commitment and policy establishment
  • Risk-based planning and assessment
  • Resource allocation and training programs
  • 38 specific controls that organisations are assessed against during certification
  • Continual improvement processes

ISO 42001 integrates with existing security and compliance frameworks, including ISO 27001 and ISO 9001, and aligns with the NIST AI Risk Management Framework and OECD AI principles. For organisations across Australia and New Zealand, implementing ISO 42001 establishes policies and procedures that align with current regulatory requirements and anticipated future standards, preparing them for evolving AI regulations globally.

Convergence of Global AI Governance

According to the May 2025 Global AI Legislation Tracker, countries around the world are developing and implementing AI governance legislation and policies: legislative mentions of AI rose 21.3% across 75 countries since 2023 and have increased ninefold since 2016. This includes comprehensive laws, specific use-case regulations, and voluntary guidelines.

For Australian and New Zealand businesses, this global convergence means:

  • Growing alignment between regional frameworks (EU AI Act, Australia's emerging guidance, New Zealand's principles)
  • International standards like ISO 42001 providing common implementation language
  • Five Eyes collaboration on AI security guidance
  • Increasing expectation of AI governance maturity from customers, investors, and regulators worldwide

Trans-Tasman AI Security Guidance Within the Global Context

Both Australia and New Zealand have published guidance on secure AI engagement, reflecting shared concerns within the Five Eyes alliance while operating within this broader global governance movement. The Australian Signals Directorate (ASD) has developed guidance on engaging with AI systems securely, emphasising that AI is another technology requiring appropriate security controls. The guidance was developed in collaboration with international partners, including the UK's NCSC, US agencies, and New Zealand's NCSC.

New Zealand's government has published interim guidance on the use of generative AI in the public service, including 10 principles for trustworthy use. The New Zealand NCSC collaborates closely with Australian counterparts and other Five Eyes partners on AI security challenges, contributing to joint guidelines for secure AI system development.

For organisations across both nations, understanding the interplay between global standards (ISO 42001), regional regulations (EU AI Act), and local guidance (ASD/NCSC frameworks) has become essential. This tri-level approach to compliance ensures organisations meet immediate local requirements while preparing for international obligations.

AI-Specific Security Threats

The cyber security challenges posed by AI systems differ from traditional IT threats in important ways, and businesses across Australia and New Zealand face similar vulnerabilities. AI introduces unique attack vectors that require specialised understanding and mitigation.

Adversarial attacks manipulate AI model inputs to cause incorrect outputs or behaviours. These attacks can be subtle, changing data in ways imperceptible to humans but that fundamentally alter AI decision-making. For organisations using AI in security-critical applications, these vulnerabilities pose serious risks.
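
To illustrate how subtle these manipulations can be, the sketch below implements the well-known fast gradient sign method (FGSM) against a dummy PyTorch classifier. The model, input, and epsilon value are placeholders for illustration, not a real deployment scenario.

```python
# Minimal sketch of the fast gradient sign method (FGSM), assuming a
# PyTorch classifier; the model, input, and epsilon are dummy values.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Nudge each input feature in the direction that most increases the
    model's loss -- a change often imperceptible to humans."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

# Dummy model and input purely to make the sketch runnable.
model = nn.Linear(4, 2)
x = torch.randn(1, 4)
label = torch.tensor([0])
x_adv = fgsm_perturb(model, x, label)
print("Max perturbation:", (x_adv - x).abs().max().item())  # bounded by epsilon
```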

Model theft represents another threat category. Attackers can extract AI models through various techniques, stealing intellectual property and potentially discovering vulnerabilities to exploit. For Australian businesses investing heavily in AI development, protecting these assets becomes paramount.

Data poisoning attacks corrupt training datasets, causing AI systems to learn incorrect patterns or behaviours. These attacks can be difficult to detect and remediate, especially when poisoning occurs early in the training process.
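
The toy example below, built on scikit-learn's synthetic data, shows how silently flipping a fraction of training labels degrades a model. The dataset, classifier, and flip rate are illustrative only.

```python
# Minimal sketch of a label-flipping poisoning attack, using a simple
# scikit-learn classifier; dataset and flip rate are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Attacker silently flips 20% of training labels before the model is fit.
rng = np.random.default_rng(seed=0)
flip = rng.random(len(y_train)) < 0.20
y_poisoned = np.where(flip, 1 - y_train, y_train)
poisoned = LogisticRegression(max_iter=1_000).fit(X_train, y_poisoned)

print("Clean accuracy:   ", clean.score(X_test, y_test))
print("Poisoned accuracy:", poisoned.score(X_test, y_test))
```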

Perhaps most concerning for many organisations is the tendency of generative AI tools to fabricate information confidently and persuasively. Users must remain sceptical of AI outputs, validating them carefully before making decisions or sharing information. This "hallucination" problem represents both a security risk and a governance challenge.
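
One practical control is to validate AI output against an explicit contract before acting on it. The sketch below assumes the model was asked to return JSON; the field names and allowed values are hypothetical.

```python
# Minimal sketch of validating generative AI output before acting on it,
# assuming the model was asked for JSON; field names are hypothetical.
import json

REQUIRED_FIELDS = {"customer_id": int, "risk_rating": str}
ALLOWED_RATINGS = {"low", "medium", "high"}

def validate_ai_output(raw: str) -> dict:
    """Reject malformed or fabricated output instead of trusting it."""
    record = json.loads(raw)  # raises a ValueError subclass on non-JSON text
    for field, expected in REQUIRED_FIELDS.items():
        if not isinstance(record.get(field), expected):
            raise ValueError(f"Missing or mistyped field: {field}")
    if record["risk_rating"] not in ALLOWED_RATINGS:
        raise ValueError("Fabricated category: " + record["risk_rating"])
    return record

print(validate_ai_output('{"customer_id": 42, "risk_rating": "low"}'))
```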

AI-driven attacks against organisations are also evolving rapidly. Threat actors use AI to generate convincing phishing content, automate vulnerability discovery, create deepfake media for social engineering, and evade detection systems. The attack surface is expanding as quickly as defensive capabilities.

The Governance Gap

Despite widespread AI adoption, governance capabilities lag dangerously. Recent research found that 93% of Australian organisations cannot effectively measure AI return on investment. Nearly half have received no formal AI training. Only 29% of businesses are implementing AI safely, even though 78% believe they're doing it right.

This gap between perception and reality creates significant risk. Organisations assume they're managing AI appropriately when fundamental governance structures remain absent or inadequate.

Effective AI governance requires several foundational elements:

  1. Clear policies that define acceptable and unacceptable AI uses, establish data handling requirements, specify human oversight obligations, and create accountability for AI-generated decisions.
  2. Appropriate technical controls including access management for AI systems, logging and monitoring of AI operations (see the logging sketch after this list), data classification and protection, and security testing specifically designed for AI vulnerabilities.
  3. Cross-functional collaboration between security teams that understand AI attack vectors and defensive measures, privacy professionals who ensure compliance with data protection requirements, legal experts who navigate regulatory obligations and liability questions, and business leaders who balance innovation with risk management.
  4. Transparency and explainability. Organisations need mechanisms to understand how AI systems make decisions, validate outputs before acting on them, identify when AI-generated content is being used, and respond to questions about AI's role in business processes.
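
To illustrate the logging and monitoring control from point 2, here is a minimal sketch of an audit wrapper around an AI call. The model function and log fields are stand-ins, not a prescribed schema.

```python
# Minimal sketch of an audit-logging wrapper for AI calls, assuming a
# text-in/text-out model function; log fields are illustrative.
import hashlib
import json
import logging
from datetime import datetime, timezone
from typing import Callable

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai-audit")

def audited(model_name: str, model_fn: Callable[[str], str]) -> Callable[[str, str], str]:
    """Record who asked what, when, and a hash of the response, so AI
    operations can be reviewed and attributed after the fact."""
    def call(user: str, prompt: str) -> str:
        output = model_fn(prompt)
        audit_log.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model": model_name,
            "user": user,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        }))
        return output
    return call

# Dummy model stands in for a real AI service.
echo = audited("demo-model", lambda p: p.upper())
print(echo("alice@example.com", "summarise the incident report"))
```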

Six Key Steps to Build Robust AI Governance Frameworks

For businesses across Australia and New Zealand developing AI governance capabilities, several strategic approaches have proven effective, drawing on guidance from both nations' cyber security centres:

  1. Start with risk-based assessments that identify which AI systems handle sensitive data, make consequential decisions, interact with customers or critical infrastructure, or present significant privacy or security risks. Both Australian and New Zealand frameworks emphasise proportionate responses based on risk levels: high-risk systems demand more rigorous governance than low-risk applications (a simple tiering sketch follows this list).
  2. Implement data governance frameworks that establish clear data classification and handling requirements, ensure compliance with Australian Privacy Principles and New Zealand's Privacy Act 2020, address data minimisation and purpose limitation principles consistent with GDPR-style approaches, and monitor for bias, fairness, and transparency concerns in line with ISO 42001 requirements. The trans-Tasman approach to privacy regulation shares common principles, making coordinated governance more straightforward.
  3. Deploy AI-specific security programs that protect against adversarial attacks and model theft, implement robust logging and monitoring aligned with ISO 42001's traceability requirements, conduct regular security assessments of AI systems, ensure third-party AI services meet security requirements, and address the technical documentation obligations similar to those under the EU AI Act for GPAI models.
  4. Consider ISO 42001 certification as a strategic approach to demonstrating AI governance maturity. The standard provides a structured framework that addresses accountability, transparency, fairness, and risk management throughout the AI lifecycle. Achieving ISO 42001 certification demonstrates to stakeholders, customers, and regulators that your organisation has implemented comprehensive AI governance, aligning with global best practices and preparing for emerging regulatory requirements worldwide.
  5. Establish human oversight mechanisms that require qualified human review of AI outputs before use, maintain appropriate control over AI decision-making, create escalation paths when AI systems behave unexpectedly, and preserve human accountability for AI-generated outcomes.
  6. Develop comprehensive training programs that educate staff about AI capabilities and limitations, build awareness of AI-specific security threats, teach best practices for working with AI systems, and foster critical thinking about AI-generated information.
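
As a concrete starting point for step 1, the sketch below tiers AI systems by a handful of risk criteria. The criteria and tier boundaries are illustrative, not drawn from any standard.

```python
# Minimal sketch of proportionate, risk-based tiering for AI systems;
# the criteria and tier boundaries are illustrative, not a standard.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    handles_personal_data: bool
    makes_consequential_decisions: bool
    customer_facing: bool

def risk_tier(system: AISystem) -> str:
    """High-risk systems demand more rigorous governance than low-risk ones."""
    score = sum([
        system.handles_personal_data,
        system.makes_consequential_decisions,
        system.customer_facing,
    ])
    return {0: "low", 1: "medium"}.get(score, "high")

print(risk_tier(AISystem("chat summariser", False, False, True)))    # medium
print(risk_tier(AISystem("loan approval model", True, True, True)))  # high
```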

The Regulatory Context Across Both Nations

While Australia and New Zealand lack AI-specific legislation, multiple frameworks influence how organisations must govern AI systems in both jurisdictions.

Australian Framework: The Australian Privacy Principles establish requirements for handling personal information that apply equally to AI systems. Industry-specific regulations create additional obligations, with ASIC publishing regulatory guides for financial services licensees using AI, and professional bodies establishing practice rules. The Information Security Manual and Essential Eight Maturity Model provide comprehensive cyber security frameworks applicable to AI systems.

New Zealand Framework: New Zealand's Privacy Act 2020 creates similar obligations around personal information handling. The New Zealand Information Security Manual (NZISM) provides security guidance that encompasses AI systems. The NCSC's Critical Controls framework applies to protecting AI infrastructure and data. New Zealand's government has published interim guidance for public service use of generative AI, establishing principles that many private sector organisations are adopting voluntarily.

Shared Approaches: Both nations emphasise the importance of treating AI systems within existing security and privacy frameworks rather than viewing them as entirely separate challenges. ASD and New Zealand's NCSC collaborate through Five Eyes partnerships, contributing to joint international guidance on secure AI system development. This cooperation means organisations operating across both jurisdictions benefit from aligned threat intelligence and best practices.

Global Standards Alignment: The EU AI Act and ISO 42001 provide additional layers of governance guidance that complement trans-Tasman frameworks. While Australia and New Zealand lack AI-specific legislation similar to the EU AI Act, organisations should monitor these developments as they signal the direction of global AI governance. Implementing ISO 42001 prepares organisations for anticipated regulatory evolution while demonstrating governance maturity to international stakeholders.

The Australian Government has also indicated that mandatory AI guardrails for high-risk settings may be forthcoming, with emphasis on better data governance. New Zealand is similarly considering enhanced regulatory approaches. Organisations that build strong governance capabilities now, potentially including ISO 42001 certification, will be better positioned as regulatory requirements evolve in both jurisdictions, and can more readily address EU AI Act compliance if they operate in European markets.

Seven Practical Steps for Businesses Across Australia and New Zealand

For organisations navigating AI governance challenges in either or both countries, several concrete actions can strengthen security and compliance posture.

  1. Conduct comprehensive inventories of AI systems in use across your organisation, including shadow AI, where employees use consumer AI tools without official approval. For trans-Tasman operations, map AI usage across both jurisdictions. Understanding what exists is the essential first step (see the inventory sketch after this list).
  2. Develop and implement clear AI use policies that establish guardrails while enabling innovation. These policies should address acceptable uses in both Australian and New Zealand contexts, data handling requirements under both privacy regimes, human oversight expectations, and security controls.
  3. Assess AI systems against privacy requirements in both nations, ensuring compliance with Australian Privacy Principles and New Zealand's Privacy Act 2020. This includes reviewing whether personal data is being used for purposes contemplated when collected and implementing appropriate transparency measures consistent with both frameworks.
  4. Implement security controls specifically designed for AI systems, including protections against adversarial attacks, monitoring for unusual AI behaviour or outputs, and regular testing of AI security.
  5. Establish governance processes that bring together privacy, security, and legal perspectives. AI governance cannot succeed in silos. Cross-functional collaboration ensures comprehensive risk management.
  6. Invest in training and awareness programs that build organisational capability around AI. This includes technical training for security professionals, governance training for business leaders, and general awareness training for all staff.
  7. Consider working with partners who understand both AI technology and regulatory requirements across Australia and New Zealand. AI governance is complex and evolving. Comprehensive support from experienced advisors who operate in both jurisdictions, such as Insicon Cyber, can accelerate capability development while reducing risk.
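
To support step 1, a minimal inventory register might look like the sketch below; the fields and example entries are hypothetical.

```python
# Minimal sketch of an AI system inventory, including shadow AI;
# the fields and example entries are illustrative, not a template.
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    system: str
    owner: str
    jurisdictions: list[str]   # e.g. ["AU"], ["NZ"], or both
    officially_approved: bool  # False marks shadow AI
    personal_data: bool
    notes: str = ""

register: list[InventoryEntry] = [
    InventoryEntry("vendor chatbot", "Customer Ops", ["AU", "NZ"], True, True),
    InventoryEntry("consumer LLM app", "unknown", ["AU"], False, True,
                   notes="discovered via network logs; review urgently"),
]

shadow_ai = [e for e in register if not e.officially_approved]
print(f"{len(shadow_ai)} shadow AI system(s) found:",
      [e.system for e in shadow_ai])
```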

The Adaptive Approach

AI technology evolves rapidly. Governance frameworks must be equally adaptive, incorporating new learnings, responding to emerging threats, adjusting to regulatory changes, and balancing security with innovation.

This adaptive approach means avoiding rigid governance structures that become obsolete quickly. Instead, establish principles-based frameworks that provide clear direction while allowing flexibility. Maintain regular review cycles that assess whether governance measures remain appropriate. Foster cultures where learning from incidents strengthens future practices rather than triggering blame.

Australasian businesses that develop adaptive AI governance capabilities gain significant advantages. They can deploy AI systems confidently, knowing appropriate safeguards exist. They can respond quickly to new requirements or threats. They can demonstrate to customers, regulators, and stakeholders that they take AI responsibility seriously.

Looking Forward

AI represents both tremendous opportunity and considerable challenge for businesses across Australia and New Zealand. The technology's potential to improve efficiency, enable new capabilities, and deliver competitive advantage is real. So are the risks around security, privacy, compliance, and ethical use.

Successfully navigating this frontier requires acknowledging complexity honestly while taking concrete action. It demands bringing together expertise from security, privacy, legal, and business domains. It requires viewing AI governance not as compliance burden but as enabler of safe innovation.

As AI adoption continues accelerating across both nations, the organisations that thrive will be those that build robust governance early, adapt as the landscape evolves, and demonstrate through their actions that AI can be deployed responsibly, securely, and in alignment with Australian and New Zealand values and regulatory expectations. The close collaboration between ASD's Australian Cyber Security Centre (ACSC) and New Zealand's NCSC through Five Eyes partnerships provides a foundation of shared intelligence and best practices that organisations can leverage.


Ready to strengthen your AI governance?

The conversation starts with understanding your current AI landscape across both jurisdictions, assessing gaps in governance capabilities, and developing integrated approaches that connect security, privacy, and legal considerations into coherent frameworks that work across the Tasman.

Insicon Cyber helps businesses across Australia and New Zealand navigate AI governance challenges through comprehensive advisory services and ongoing support. Our integrated approach brings together cyber security expertise, regulatory knowledge across both jurisdictions, and operational capability to ensure AI systems deliver value while managing risk effectively.

Share your AI governance insights using #CyberMonth2025 (Australia) and #CyberSmartWeek (New Zealand).

Sources

This Insicon Cyber Insight draws on research and reporting from Australian, New Zealand, and international sources.