Artificial intelligence is reshaping business across Australia and New Zealand at remarkable speed.
More than half of Australian enterprises have already embedded AI into their IT and business strategies, representing one of the highest adoption rates globally. New Zealand organisations are following similar trajectories, with businesses across both nations recognising AI's transformative potential. But this acceleration brings sobering realities: 78% of Australians are concerned about negative AI outcomes, and only 36% trust the technology. More telling still, 54% of Australian organisations say reducing AI security and legal risks is "very" or "extremely" difficult.
This tension between rapid adoption and uncertain governance defines the current AI moment across the Tasman. As Cyber Security Awareness Month in Australia and Cyber Smart Week in New Zealand highlight the importance of building cyber safe cultures, AI governance has emerged as a critical frontier where technology, security, privacy, and legal considerations converge in complex ways.
The question facing businesses in both nations isn't whether to adopt AI. That decision has largely been made. The question is how to govern AI systems securely, compliantly, and ethically while extracting their considerable business value.
AI governance has introduced what experts call "the new triad": privacy, cyber security, and legal working together as essential pillars. Previously, privacy and security collaborated to protect data. AI's complexity demands that legal expertise become equally central.
This convergence happens because AI systems amplify traditional risks while creating entirely new ones. Consider the challenge of data governance. AI systems require vast datasets for training and operation. These datasets often contain personal information subject to the Privacy Act and Australian Privacy Principles. Using this data for AI purposes that weren't contemplated when it was collected creates immediate privacy questions. Organisations must assess whether their current privacy policies permit AI use cases and whether additional consent or transparency measures are required.
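As a minimal illustration of that assessment, the sketch below applies a purpose-compatibility gate before personal records are reused for model training. The record structure, field names, and the "ai_model_training" purpose label are hypothetical; a real control would be driven by your collection notices, privacy policy, and the Australian Privacy Principles.

```python
# A minimal sketch of a purpose-compatibility gate before personal data is
# reused for AI training. Field names and purpose labels are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PersonalRecord:
    record_id: str
    consented_purposes: set[str] = field(default_factory=set)

def eligible_for_ai_training(records: list[PersonalRecord]) -> list[PersonalRecord]:
    """Keep only records whose collection notice covered AI/model training."""
    return [r for r in records if "ai_model_training" in r.consented_purposes]

records = [
    PersonalRecord("cust-001", {"service_delivery", "ai_model_training"}),
    PersonalRecord("cust-002", {"service_delivery"}),  # excluded: no AI purpose
]
print([r.record_id for r in eligible_for_ai_training(records)])  # ['cust-001']
```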
Security challenges multiply in AI environments. Data supply chain risks emerge when training data comes from multiple sources, some potentially compromised. Maliciously modified data can poison AI models, causing them to produce biased or incorrect outputs. Data drift occurs when the statistical properties of data change over time, degrading model performance and potentially creating security vulnerabilities.
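Data drift, in particular, lends itself to automated monitoring. The sketch below is a minimal illustration on synthetic data, assuming a single numeric feature and a two-sample Kolmogorov-Smirnov test; real monitoring would cover many features, with thresholds tuned to the model and business context.

```python
# A minimal sketch of statistical drift detection between a training-time
# baseline and recent production data. Thresholds here are illustrative.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(baseline: np.ndarray, recent: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift when the two samples are unlikely to share a distribution."""
    _statistic, p_value = ks_2samp(baseline, recent)
    return p_value < alpha

rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # feature at training time
recent = rng.normal(loc=0.4, scale=1.2, size=5_000)    # shifted production data

if detect_drift(baseline, recent):
    print("Drift detected: investigate data sources and consider retraining.")
```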
Legal considerations add another layer. AI systems can infringe copyright during training if they consume protected works without authorisation. They can generate outputs that reproduce copyrighted material, creating liability for organisations that use these outputs. In Australia, where copyright protection is automatic but requires human authorship, AI-generated content may lack protection entirely, meaning competitors can freely use it.
The regulatory landscape remains unsettled. While Europe has implemented the AI Act, Australia has not yet introduced AI-specific legislation. However, existing laws around intellectual property, privacy, and industry-specific requirements apply to AI systems. This means organisations must navigate ambiguity while remaining compliant with established frameworks.
The international community is rapidly developing comprehensive AI governance frameworks, with the EU leading regulatory efforts and global standards bodies providing implementation guidance.
The EU AI Act, which entered into force on 1 August 2024, represents the world's first comprehensive regulatory framework for AI. The Act defines four risk categories for AI systems, with the corresponding obligations phased in at roughly six- to twelve-month intervals. As of 2 August 2025, general-purpose AI (GPAI) models must comply with specific rules mandating transparency, technical documentation, and disclosure of copyrighted material used during training. The Act will be fully applicable by 2 August 2026.
Penalties can range from EUR 7.5 million or 1.5% of worldwide annual turnover to EUR 35 million or 7% of worldwide annual turnover, depending on the type of noncompliance. Like the GDPR before it, experts anticipate the EU AI Act will spur the development of AI governance and ethics standards worldwide.
For businesses operating globally, including those in Australia and New Zealand, the EU AI Act's extraterritorial reach means that organisations providing AI systems or services to EU markets must comply with its requirements, regardless of where they're based.
ISO/IEC 42001:2023 is the world's first AI management system standard, published in December 2023. It specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organisations. The standard addresses unique challenges AI poses, including ethical considerations, transparency, bias identification and mitigation, and continuous learning.
ISO 42001 is designed for entities providing or utilising AI-based products or services, ensuring responsible development and use of AI systems. The standard covers issues throughout the AI system lifecycle, from the initial concept phase to the final deployment and operation.
Key requirements include establishing an organisational AI policy, assigning clear roles and accountability for AI systems, conducting AI risk assessments and AI system impact assessments, managing AI systems across their lifecycle, and continually improving the management system itself.
ISO 42001 integrates with existing security and compliance frameworks, including ISO 27001, ISO 9001, and aligns with the NIST AI Risk Management Framework and OECD AI principles. For organisations across Australia and New Zealand, implementing ISO 42001 establishes policies and procedures that align with current regulatory requirements and anticipated future standards, preparing them for evolving AI regulations globally.
According to the May 2025 Global AI legislation tracker, countries around the world are developing and implementing AI governance legislation and policies, with legislative mentions of AI rising 21.3% across 75 countries since 2023, marking a ninefold increase since 2016. This includes comprehensive laws, specific use-case regulations, and voluntary guidelines.
For Australian and New Zealand businesses, this global convergence means that extraterritorial regimes such as the EU AI Act can reach their AI products and services, that international standards such as ISO 42001 are becoming the benchmark for demonstrating governance maturity, and that domestic regulation is likely to follow the same direction of travel.
Both Australia and New Zealand have published guidance on secure AI engagement, reflecting shared concerns within the Five Eyes alliance while operating within this broader global governance movement. The Australian Signals Directorate (ASD) has developed guidance on engaging with AI systems securely, emphasising that AI should be treated as another technology requiring appropriate security controls. The guidance was developed in collaboration with international partners, including the UK's National Cyber Security Centre, US agencies, and New Zealand's National Cyber Security Centre (NCSC).
New Zealand's government has published interim guidance on the use of generative AI in the public service, including 10 principles for trustworthy use. The New Zealand NCSC collaborates closely with Australian counterparts and other Five Eyes partners on AI security challenges, contributing to joint guidelines for secure AI system development.
For organisations across both nations, understanding the interplay between global standards (ISO 42001), regional regulations (EU AI Act), and local guidance (ASD/NCSC frameworks) has become essential. This tri-level approach to compliance ensures organisations meet immediate local requirements while preparing for international obligations.
The cybersecurity challenges posed by AI systems differ from traditional IT threats in important ways, and businesses across Australia and New Zealand face similar vulnerabilities. AI introduces unique attack vectors that require specialised understanding and mitigation.
Adversarial attacks manipulate AI model inputs to cause incorrect outputs or behaviours. These attacks can be subtle, altering inputs in ways that are imperceptible to humans yet fundamentally change AI decision-making. For organisations using AI in security-critical applications, these vulnerabilities pose serious risks.
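To make the idea concrete, the sketch below perturbs an input to a simple linear scikit-learn classifier on synthetic data; gradient-based attacks on neural networks (such as FGSM) follow the same principle of small, targeted input changes that flip a prediction. This is an illustration only, not a real attack tool or a claim about any particular production model.

```python
# A minimal sketch of an adversarial perturbation against a linear model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1_000, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1_000).fit(X, y)

# Pick the sample closest to the decision boundary so a tiny nudge suffices.
i = int(np.argmin(np.abs(model.decision_function(X))))
x = X[i : i + 1].copy()
original = model.predict(x)[0]

# Shift each feature slightly in the direction that favours the other class.
epsilon = 0.25
direction = np.sign(model.coef_) if original == 0 else -np.sign(model.coef_)
x_adversarial = x + epsilon * direction

print(f"original: {original}, after perturbation: {model.predict(x_adversarial)[0]}")
```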
Model theft represents another threat category. Attackers can extract AI models through various techniques, stealing intellectual property and potentially discovering vulnerabilities to exploit. For Australian businesses investing heavily in AI development, protecting these assets becomes paramount.
Data poisoning attacks corrupt training datasets, causing AI systems to learn incorrect patterns or behaviours. These attacks can be difficult to detect and remediate, especially when poisoning occurs early in the training process.
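The sketch below gives a minimal, synthetic illustration of the effect: deliberately relabelling part of one class in the training data noticeably degrades a simple scikit-learn classifier. Real poisoning is far subtler, which is why provenance and integrity controls over training data matter.

```python
# A minimal sketch of label-flipping data poisoning on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Attacker relabels half of one class in the training set.
rng = np.random.default_rng(0)
class0 = np.where(y_train == 0)[0]
flipped = rng.choice(class0, size=len(class0) // 2, replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 1

clean = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=1_000).fit(X_train, y_poisoned)

print(f"clean accuracy:    {clean.score(X_test, y_test):.2f}")
print(f"poisoned accuracy: {poisoned.score(X_test, y_test):.2f}")
```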
Perhaps most concerning for many organisations is the tendency of generative AI tools to fabricate information confidently and persuasively. Users must remain sceptical of AI outputs, validating them carefully before making decisions or sharing information. This "hallucination" problem represents both a security risk and a governance challenge.
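One practical control is to gate generated content behind an automated check before it reaches decisions or customers. The sketch below is a minimal, hypothetical example that routes answers citing unapproved (or no) sources to human review; the URLs, the pattern matching, and the routing rule are illustrative assumptions, not a recommended architecture.

```python
# A minimal sketch of an output-validation gate for generative AI answers.
# Approved sources and routing logic are hypothetical.
import re

APPROVED_SOURCES = {
    "https://intranet.example.com/policy/ai-use",
    "https://intranet.example.com/kb/privacy-faq",
}

def requires_human_review(generated_answer: str) -> bool:
    """Route the answer to a reviewer unless every cited source is approved."""
    cited = {url.rstrip(".,;)") for url in re.findall(r"https?://\S+", generated_answer)}
    return not cited or bool(cited - APPROVED_SOURCES)

answer = "Per https://intranet.example.com/policy/ai-use, staff may use approved tools."
print(requires_human_review(answer))  # False: every citation is on the approved list
```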
AI-driven attacks against organisations are also evolving rapidly. Threat actors use AI to generate convincing phishing content, automate vulnerability discovery, create deepfake media for social engineering, and evade detection systems. The attack surface is expanding as quickly as defensive capabilities.
Despite widespread AI adoption, governance capabilities lag dangerously. Recent research found that 93% of Australian organisations cannot effectively measure AI return on investment. Nearly half have received no formal AI training. Only 29% of businesses are implementing AI safely, even though 78% believe they're doing it right.
This gap between perception and reality creates significant risk. Organisations assume they're managing AI appropriately when fundamental governance structures remain absent or inadequate.
For businesses across Australia and New Zealand developing AI governance capabilities, several strategic approaches have proven effective, drawing on guidance from both nations' cyber security centres: treating AI as another technology subject to existing security controls, establishing cross-functional governance that brings security, privacy, and legal expertise together, validating AI outputs before relying on them, and investing in formal AI training for staff.
While Australia and New Zealand lack AI-specific legislation, multiple frameworks influence how organisations must govern AI systems in both jurisdictions.
Australian Framework: The Australian Privacy Principles establish requirements for handling personal information that apply equally to AI systems. Industry-specific regulations create additional obligations, with ASIC publishing regulatory guides for financial services licensees using AI, and professional bodies establishing practice rules. The Information Security Manual and Essential Eight Maturity Model provide comprehensive cybersecurity frameworks applicable to AI systems.
New Zealand Framework: New Zealand's Privacy Act 2020 creates similar obligations around personal information handling. The NZISM provides security guidance that encompasses AI systems. The NCSC's Critical Controls framework applies to protecting AI infrastructure and data. New Zealand's government has published interim guidance for public service use of generative AI, establishing principles that many private sector organisations are adopting voluntarily.
Shared Approaches: Both nations emphasise the importance of treating AI systems within existing security and privacy frameworks rather than viewing them as entirely separate challenges. ASD and New Zealand's NCSC collaborate through Five Eyes partnerships, contributing to joint international guidance on secure AI system development. This cooperation means organisations operating across both jurisdictions benefit from aligned threat intelligence and best practices.
Global Standards Alignment: The EU AI Act and ISO 42001 provide additional layers of governance guidance that complement trans-Tasman frameworks. While Australia and New Zealand lack AI-specific legislation similar to the EU AI Act, organisations should monitor these developments as they signal the direction of global AI governance. Implementing ISO 42001 prepares organisations for anticipated regulatory evolution while demonstrating governance maturity to international stakeholders.
The Australian Government has also indicated that mandatory AI guardrails for high-risk settings may be forthcoming, with an emphasis on better data governance. New Zealand is similarly considering enhanced regulatory approaches. Organisations that build strong governance capabilities now, potentially including ISO 42001 certification, will be better positioned when regulatory requirements evolve in both jurisdictions and can more readily address EU AI Act compliance if they operate in European markets.
For organisations navigating AI governance challenges in either or both countries, several concrete actions can strengthen security and compliance posture.
AI technology evolves rapidly. Governance frameworks must be equally adaptive, incorporating new learnings, responding to emerging threats, adjusting to regulatory changes, and balancing security with innovation.
This adaptive approach means avoiding rigid governance structures that become obsolete quickly. Instead, establish principles-based frameworks that provide clear direction while allowing flexibility. Maintain regular review cycles that assess whether governance measures remain appropriate. Foster cultures where learning from incidents strengthens future practices rather than triggering blame.
Australasian businesses that develop adaptive AI governance capabilities gain significant advantages. They can deploy AI systems confidently, knowing appropriate safeguards exist. They can respond quickly to new requirements or threats. They can demonstrate to customers, regulators, and stakeholders that they take AI responsibility seriously.
AI represents both tremendous opportunity and considerable challenge for businesses across Australia and New Zealand. The technology's potential to improve efficiency, enable new capabilities, and deliver competitive advantage is real. So are the risks around security, privacy, compliance, and ethical use.
Successfully navigating this frontier requires acknowledging complexity honestly while taking concrete action. It demands bringing together expertise from security, privacy, legal, and business domains. It requires viewing AI governance not as compliance burden but as enabler of safe innovation.
As AI adoption continues accelerating across both nations, the organisations that thrive will be those that build robust governance early, adapt as the landscape evolves, and demonstrate through their actions that AI can be deployed responsibly, securely, and in alignment with Australian and New Zealand values and regulatory expectations. The close collaboration between ASD's Australian Cyber Security Centre (ACSC) and New Zealand's NCSC through Five Eyes partnerships provides a foundation of shared intelligence and best practices that organisations can leverage.
The conversation starts with understanding your current AI landscape across both jurisdictions, assessing gaps in governance capabilities, and developing integrated approaches that connect security, privacy, and legal considerations into coherent frameworks that work across the Tasman.
Insicon Cyber helps businesses across Australia and New Zealand navigate AI governance challenges through comprehensive advisory services and ongoing support. Our integrated approach brings together cyber security expertise, regulatory knowledge across both jurisdictions, and operational capability to ensure AI systems deliver value while managing risk effectively.
Share your AI governance insights using #CyberMonth2025 (Australia) and #CyberSmartWeek (New Zealand)