AI Security & Governance: ANZ regulatory obligations
Regulatory expectations for AI governance are evolving rapidly in Australia and New Zealand. Insicon Cyber monitors these obligations and maps its service delivery to them.
APRA CPS 234 — Information Security
CPS 234 requires APRA-regulated entities to maintain information security capabilities commensurate with their risk profile, including the risk posed by material service providers. AI systems embedded in regulated environments — whether built internally or provided by third parties — are subject to CPS 234 assessment. AI Assurance provides the technical security evidence CPS 234 assessments require.
APRA CPS 230 — Operational Risk Management
CPS 230 requires APRA-regulated entities to manage operational risk and maintain critical operations within tolerance through disruption, including disruption affecting AI systems. AI Assurance operational stress testing — covering latency, resource exhaustion, and availability under adversarial pressure — provides the documented evidence CPS 230 resilience assessments require.
Privacy Act 1988 (Australia) and NZ Privacy Act 2020
Data exfiltration through prompt injection is a live privacy risk for any organisation processing personal information through AI systems. AI Assurance adversarial testing includes data exfiltration scenarios designed specifically for environments subject to Privacy Act obligations. ISO 42001 implementation includes data governance controls and AI system impact assessments aligned to Privacy Act accountability obligations.
ISO/IEC 42001:2023 — AI Management System
ISO 42001 is the international standard for AI management systems. Certification demonstrates to boards, regulators, and clients that an organisation has implemented a structured framework for AI risk assessment, AI system impact assessment, and continual improvement. Insicon Cyber implements ISO 42001 as a certified practice, building on its proven ISO 27001 methodology.