Skip to the main content.

AI Assurance

Know your AI risk before attackers do.

Insicon Cyber AI Assurance is an AI security testing and runtime protection service for Australian and New Zealand organisations deploying AI in regulated environments. Powered by F5 AI Red Team and F5 AI Guardrails. Led by Insicon Cyber's experienced Australia-based team. Executed by the Insicon Cyber aSOC.

What is AI Assurance?

AI Assurance is Insicon Cyber's AI security service that combines automated adversarial testing of AI systems with runtime protection enforcement and fractional CISO advisory. It is built for the class of threats — prompt injection, jailbreak chaining, data exfiltration via AI, multi-agent privilege escalation — that traditional security controls cannot detect.

AI systems are non-deterministic. You cannot firewall a conversation. Web application firewalls, conventional penetration testing, and standard SIEM monitoring were designed for predictable, code-based systems. Large language models, AI agents, and generative AI applications introduce a fundamentally different attack surface — one that requires a different class of testing and a different class of protection.

AI Assurance is available in two tiers: a one-off assessment (AI Assurance: Assess) and an ongoing subscription with continuous automated testing and always-on protection (AI Assurance: Continuous).

The AI security threat landscape in Australia and New Zealand

AI adoption across Australian and New Zealand mid-market organisations is accelerating. Customer service chatbots, internal copilots, HR automation tools, and agentic workflows are being deployed into regulated environments that were designed for a different threat model.

Prompt injection

Prompt injection is an attack technique where a malicious input manipulates an AI system into behaving in an unauthorised way — disclosing sensitive data, bypassing access controls, or executing unintended actions. Prompt injection attacks are semantic in nature; they cannot be blocked by traditional signature-based security controls. For Australian organisations subject to the Privacy Act 1988, a successful prompt injection attack resulting in personal data disclosure may trigger mandatory notification obligations under the Notifiable Data Breaches scheme.

Jailbreak chaining and token compression

Jailbreak attacks use structured conversational sequences to bypass the safety guardrails built into AI models. Token compression attacks hide malicious instructions in formats that AI systems can process but human reviewers cannot see. Both attack techniques are evolving rapidly and require continuous testing updates — not a point-in-time annual assessment — to remain effective as a detection tool.

Agentic AI and multi-agent privilege escalation

Agentic AI systems — workflows where AI agents take autonomous actions on behalf of users — introduce a new category of privilege escalation risk. An agent that is manipulated via prompt injection can execute tool calls, access back-end systems, and interact with other agents in ways the deploying organisation never intended. Documented incidents include cross-organisation data exposure via MCP server logic flaws and back-end system access granted incorrectly via compromised agent trust chains.

Training data poisoning and model integrity

Data poisoning attacks introduce malicious data into the training pipeline of AI models, corrupting their outputs in ways that are difficult to detect post-deployment. For organisations fine-tuning or retraining models on proprietary data — including sensitive client or patient data — training data integrity is a material security obligation.

Powered by F5 AI Red Team and F5 AI Guardrails

F5 AI Red Team

F5 AI Red Team is an automated adversarial testing platform that simulates real-world AI attacks at unprecedented speed and scale. It combines three testing types: agentic resistance testing (dynamic, multi-turn campaigns that emulate sophisticated real-world attackers), signature attacks (tens of thousands of real-world prompts updated every month), and operational attacks (validating resilience under crash, resource exhaustion, and latency stress). Findings are delivered as prioritised remediation reports with security scores, severity classifications, and natural-language explainability of how and why each attack succeeded.

F5 AI Guardrails

F5 AI Guardrails provides runtime protection for AI applications and agents — blocking prompt injection, data exfiltration attempts, and jailbreak techniques as they occur in production. AI Guardrails operates as the enforcement layer that converts Red Team findings into active protection policies. Where Red Team identifies the vulnerability, Guardrails closes it in real time.

AI Assurance: two tiers to match your requirements

AI Assurance: Assess

One-off assessment. Point-in-time. Boardroom-ready output.

A structured, time-boxed AI security assessment that maps your AI attack surface, tests deployed systems against real-world adversarial techniques, and delivers a risk-scored remediation roadmap. Delivered in three to four weeks. Led by a fractional CISO. Executed by the Insicon Cyber aSOC.

What AI Assurance: Assess covers

  • AI system inventory and attack surface mapping across models, applications, and agents
  • Automated adversarial testing: prompt injection, jailbreak chaining, data exfiltration, token compression, multi-agent privilege escalation
  • Operational stress testing: latency, resource exhaustion, and availability under adversarial pressure
  • Risk-scored findings report with severity classifications and attack evidence
  • Prioritised remediation roadmap with fractional CISO commentary
  • Regulatory gap analysis mapped to APRA CPS 234, CPS 230, ISO 42001, and the Australian Privacy Act 1988 or NZ Privacy Act 2020 as applicable

What you receive

A boardroom-ready report with an executive summary suitable for board and audit committee presentation, a technical appendix with full findings, severity classifications, and successful attack evidence, and a CISO-authored remediation roadmap with prioritised actions and regulatory mapping. AI Assurance: Assess findings also serve as the technical evidence base for ISO 42001 gap assessment and risk treatment planning.

AI Assurance: Continuous

Ongoing subscription. Always-on testing. Always-on protection.

Continuous automated adversarial testing of AI systems combined with F5 AI Guardrails runtime protection, integrated into the Insicon Cyber aSOC. AI security posture updates as the threat landscape evolves. Quarterly fractional CISO reviews translate findings into governance reporting for boards and risk committees.

What AI Assurance: Continuous covers

  • Monthly automated F5 AI Red Team campaigns — tens of thousands of real-world attack prompts, updated every month
  • F5 AI Guardrails runtime enforcement: blocking prompt injection, data exfiltration, and jailbreak attempts in production
  • Integration with Insicon Cyber aSOC for alert triage, incident escalation, and 24/7 monitoring
  • Monthly security score trending report and executive summary
  • Quarterly fractional CISO advisory session covering findings, regulatory updates, and board-level risk narrative
  • Continuous alignment to OWASP LLM Top 10, OWASP Agentic Top 10, and ASD AI security guidance

The closed-loop advantage

Every AI Assurance: Continuous campaign automatically feeds findings into F5 AI Guardrails runtime policies. When a new attack technique is identified in testing, the corresponding protection is enforced in production without waiting for the next manual update cycle. This closed loop — test, find, protect, re-test — is the core operational advantage of AI Assurance: Continuous over point-in-time assessments.

From Assess to Continuous

AI Assurance: Assess is the recommended entry point for organisations without an existing AI security testing programme. The Assess engagement establishes the attack surface baseline and generates the findings that configure F5 AI Guardrails for the Continuous tier. Clients who complete an Assess engagement begin AI Assurance: Continuous with a pre-configured protection baseline — accelerating time-to-protection and providing a quantified starting posture for board reporting.

Assess establishes your baseline. Continuous defends it.

How Insicon Cyber delivers AI Assurance

Every AI Assurance engagement is led by an experienced Australia-based practitioner. They translate adversarial testing results into language boards and audit committees can act on. The Insicon Cyber SOC handles technical execution, alert triage, and escalation.

Fractional CISO leadership

Matt Miller (CEO and co-founder) and Greg Bunt (Director and co-founder) have advised boards and executive teams across Australian and New Zealand financial services, aged care, and healthcare organisations for over a decade. Every AI Assurance report is authored and presented by a CISO-level practitioner.

Insicon Cyber aSOC execution

The Insicon Cyber autonomous Security Operations Centre (aSOC) operates 24/7 across Australia and New Zealand, powered by Google SecOps. AI Assurance: Continuous integrates directly with the aSOC, ensuring AI-specific alerts are triaged and escalated within the same operational framework as all other security monitoring. Australian data sovereignty maintained throughout.

ANZ regulatory mapping on every engagement

Every AI Assurance report maps findings to the regulatory obligations relevant to the client's sector and jurisdiction: APRA CPS 234, CPS 230, the Australian Privacy Act 1988, the NZ Privacy Act 2020, and ASD guidance. Boards receive a report they can table. Audit committees receive the evidence trail they need.

AI Assurance and ISO 42001: a natural connection

ISO/IEC 42001:2023 requires organisations to implement AI risk assessment and AI system impact assessment processes. AI Assurance: Assess findings provide the technical risk evidence that directly informs both processes — accelerating ISO 42001 gap assessment and strengthening risk treatment documentation.

Organisations running AI Assurance: Continuous feed monthly security score data and quarterly CISO briefings directly into their ISO 42001 management review process, providing an auditable, continuous evidence base for surveillance audits.

Learn about Insicon Cyber ISO 42001

Find out what attackers would find in your AI systems

Talk to one of Insicon Cyber's fractional CISOs about an AI Assurance: Assess engagement. No-cost initial briefing. Scoped to your AI environment and regulatory obligations.

Part of the Insicon Cyber AI Security & Governance practice, alongside ISO 42001 and Managed Compliance.

Frequently asked questions about AI Assurance

What is AI Assurance?

AI Assurance is Insicon Cyber's AI security testing and runtime protection service. It uses F5 AI Red Team to run automated adversarial campaigns against AI models, applications, and agents — testing for prompt injection, jailbreak chaining, data exfiltration, and multi-agent privilege escalation. Findings feed into F5 AI Guardrails for runtime protection. The service is led by fractional CISOs and executed by the Insicon Cyber aSOC.

How is AI Assurance different from penetration testing?

Traditional penetration testing was designed for deterministic systems — code with predictable inputs and outputs. AI systems are non-deterministic. Prompt injection and jailbreak attacks are semantic problems, not code vulnerabilities. Penetration testing firms test for one class of risk; AI Assurance tests for a different class. They are not the same exercise and one does not substitute for the other. AI Assurance also provides continuous automated testing — a capability that point-in-time penetration testing cannot replicate.

Does our web application firewall protect our AI systems?

No. Web application firewalls were designed for deterministic HTTP traffic — they match patterns against known attack signatures. Prompt injection and jailbreak attacks are embedded in natural language and cannot be blocked by signature matching. You cannot firewall a conversation. F5 AI Guardrails — the runtime protection component of AI Assurance: Continuous — is purpose-built for AI-specific runtime threats.

Our AI vendor says their platform is secure. Do we still need AI Assurance?

Vendor security covers the model and the platform infrastructure — not how your organisation has configured, prompted, integrated, or deployed that model. The attack surface is in the implementation: the system prompts, the tool integrations, the trust boundaries between agents. That implementation is entirely your organisation's responsibility and is entirely untested until you test it. AI Assurance tests the implementation layer, not the underlying platform.

Is AI Assurance relevant to APRA CPS 234 compliance?

Yes. APRA CPS 234 requires regulated entities to maintain information security capabilities commensurate with the risk posed by their systems and material service providers, including AI systems. AI Assurance: Assess provides the technical security evidence that CPS 234 assessments require when AI systems are in scope. AI Assurance: Continuous provides the ongoing testing evidence that demonstrates sustained security capability to prudential reviewers.

How long does AI Assurance: Assess take?

AI Assurance: Assess is typically delivered in three to four weeks from engagement commencement to final report. The timeline depends on the number of AI systems in scope and the complexity of agent integrations. Insicon Cyber provides a scoped timeline at engagement commencement.

Does AI Assurance cover agentic AI systems?

Yes. Agentic AI — workflows where AI agents take autonomous actions, use tools, and interact with other agents — is the most security-critical frontier in AI deployment. AI Assurance specifically tests for agentic attack vectors including multi-agent privilege escalation, cross-agent prompt injection, and MCP server trust boundary violations. F5 AI Red Team's agentic resistance testing is designed for exactly this environment.