Your employees are already using AI. The question is whether you know about it.
The gap between employee AI adoption and organisational AI governance is the single biggest AI risk most Australian businesses face today. It is not a hypothetical risk. It is a measured, quantified exposure that is growing every quarter.
According to Microsoft's 2024 Work Trend Index, 75% of knowledge workers are already using generative AI, with usage nearly doubling in six months. Yet only 28% of organisations have formal AI policies in place. Put those figures together: most of your workforce is likely making daily decisions about what data to share with AI tools, and in most organisations there is no guidance on what's acceptable.
The Australian picture is no different. The Australian Government's AI Adoption Tracker recorded 40% of SMEs adopting AI by Q4 2024, while AWS research estimated 1.3 million Australian businesses regularly using AI tools. But adoption doesn't mean governance. The Reserve Bank's 2025 survey of medium-to-large firms found that for the largest group of adopters, usage was minimal: summarising emails and drafting text via off-the-shelf products with no formal controls.
This is shadow AI. Employees using personal accounts on free AI platforms, pasting confidential data into tools that explicitly use inputs for model training, and making decisions based on AI outputs that no one has verified. IBM's 2025 Cost of a Data Breach Report found that one in five organisations experienced a breach linked to shadow AI, with those incidents adding USD $670,000 to the average breach cost. Of the organisations that reported AI-related breaches, 97% lacked proper AI access controls.
The productivity benefits are real and measurable. A controlled study found developers using AI coding assistants completed tasks 55.8% faster. PwC's 2025 global workforce survey of nearly 50,000 workers found daily AI users reporting significantly higher productivity (92% vs. 58% for non-users). This is not a technology you can ban. It is a technology you must govern.
AI adoption without AI governance is not innovation. It is unmanaged risk with your organisation's data, reputation, and regulatory standing as the collateral.
Five lessons from using AI daily in a cybersecurity consultancy.
We didn't start with a perfect AI governance framework. We started by using AI tools, discovering the risks through direct experience, and building controls to address them. Here's what we learned.
Lesson 1: You need an AI security policy before your people need AI.
The moment we started using AI tools for document review and draft preparation, we realised that every team member was making individual decisions about what to share with the platform. Some were careful. Others weren't. Without a documented policy, there was no baseline for acceptable behaviour.
Our AI security policy is not complex, but it is specific. It covers which platforms are approved (and which are explicitly prohibited), what data can and cannot be entered, how outputs must be treated, and what happens when someone is unsure. Every team member completes AI-specific awareness training before they are given access to any AI tool.
Our guardrails are straightforward: Never paste customer data into any AI platform. Our approach is to desensitise documents entirely before they touch an AI tool. Replace client names with <CLIENT>. Remove pricing, detailed scoping, anything that identifies a person, a client, or a system. If you can't desensitise it, don't use AI on it.
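To make the guardrail concrete, here is a minimal sketch of what an automated desensitisation pass could look like before any text reaches an AI tool. The patterns, placeholders, and the `desensitise` helper are illustrative assumptions for this article, not our production tooling, and pattern-based redaction always needs a human check for anything it misses.

```python
import re

# Illustrative patterns only; a real desensitisation step should be driven by
# your own data classification (client names, project codes, system identifiers).
REDACTIONS = [
    (re.compile(r"\b[A-Z][a-z]+ Pty Ltd\b"), "<CLIENT>"),         # company names
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),      # email addresses
    (re.compile(r"\$\s?\d[\d,]*(\.\d+)?\b"), "<PRICE>"),          # dollar amounts
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<IP_ADDRESS>"), # IPv4 addresses
]


def desensitise(text: str) -> str:
    """Replace identifying details with placeholders before AI use."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text


if __name__ == "__main__":
    raw = "Acme Pty Ltd (contact: jane@acme.com.au) quoted $42,000 for host 10.1.2.3."
    print(desensitise(raw))
    # -> <CLIENT> (contact: <EMAIL>) quoted <PRICE> for host <IP_ADDRESS>.
```

The point is not these specific patterns. It is that desensitisation happens as a deliberate step before the AI tool ever sees the document, and that anything the patterns can't cover stays out of the tool entirely.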
Lesson 2: AI outputs are drafts, always.
No document leaves Cliffside before a human peer review. Every AI-assisted output is treated as a draft that must be independently reviewed for accuracy, tone, technical correctness, and fitness for purpose. This is not a suggestion. It is a mandatory step in our quality process.
This matters more than most people realise. AI models hallucinate. Stanford research found that AI models hallucinate on legal queries between 69% and 88% of the time, collectively inventing over 120 non-existent court cases. Even for general business content, hallucination remains structurally inevitable under current LLM architectures. If there are citations or references to research in an AI-assisted output, verifying those citations is part of our peer review process. We have caught fabricated sources, dead links, and misattributed statistics.
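A small helper like the sketch below can support that review step by flagging links in a draft that don't resolve. It assumes the third-party `requests` library, and it cannot detect a fabricated source that happens to point at a real page, so it supplements rather than replaces human verification.

```python
import re

import requests  # third-party: pip install requests

URL_PATTERN = re.compile(r"https?://[^\s)\]>\"']+")


def flag_suspect_links(draft: str, timeout: float = 10.0) -> list[tuple[str, str]]:
    """Return (url, reason) pairs for links that need manual follow-up."""
    flagged = []
    for url in sorted(set(URL_PATTERN.findall(draft))):
        try:
            resp = requests.head(url, allow_redirects=True, timeout=timeout)
            if resp.status_code >= 400:
                flagged.append((url, f"HTTP {resp.status_code}"))
        except requests.RequestException as exc:
            flagged.append((url, type(exc).__name__))
    return flagged
```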
Lesson 3: Only use company-authorised AI platforms, and never use free tools.
We pay for every AI tool we use, and we verify that the data we send to the platform is not used for model training. This is a non-negotiable requirement.
The difference between free and enterprise AI tools is not a feature gap. It is a security architecture gap. Free-tier AI platforms typically use your inputs for model training by default. That means your confidential data, your clients' data, and your proprietary methodologies become part of the model's training corpus, potentially retrievable by other users. Enterprise platforms contractually commit to never training on customer data, offer configurable retention policies, provide audit logging, and support SSO integration.
If a platform you're evaluating offers the option to disable AI services, disable them first and perform a security and risk assessment before enabling them. If in doubt, ask your line manager. If your line manager is unsure, escalate to the security team. The default answer for unassessed AI tools is no.
Lesson 4: Use system-level instructions to standardise outputs.
We configure company-level instructions across our AI platforms. This ensures a reasonable level of standardisation in outputs: consistent formatting, Australian English, appropriate tone, correct terminology. It doesn't replace human review (Lesson 2 still applies), but it reduces the delta between raw AI output and finished deliverable.
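Most enterprise platforms expose this as workspace-level custom instructions or a system prompt. The sketch below shows the general pattern for an API-based integration: the standing instruction is defined once at the organisation level and prepended to every request, rather than left to individual users to remember. The wording and the `build_messages` helper are illustrative; the exact mechanism and field names depend on your approved platform.

```python
# Organisation-level standing instruction, version-controlled like any policy.
COMPANY_INSTRUCTIONS = (
    "Write in Australian English. Use a plain, professional tone. "
    "Follow the company style guide for headings and terminology. "
    "Flag any claim that needs a source rather than inventing one."
)


def build_messages(user_prompt: str) -> list[dict[str, str]]:
    """Assemble a chat request with the company system instruction first."""
    return [
        {"role": "system", "content": COMPANY_INSTRUCTIONS},
        {"role": "user", "content": user_prompt},
    ]

# The resulting list is passed to whichever chat API your approved platform
# provides; the raw output is still a draft and still gets peer review.
```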
Lesson 5: Disclose AI usage transparently.
With team members from diverse professional and cultural backgrounds, AI tools are genuinely useful for standardising language, refining jargon, and maintaining consistent quality across documentation. But we make it clear: if AI was used to edit or assist in producing a document, that's disclosed. Transparency is not a weakness. It's a quality signal that shows your organisation takes AI governance seriously.
Disclaimer: This article was prepared with the assistance of AI tools, consistent with Cliffside's AI-Assisted Content Policy. All content has been reviewed, verified, and approved by qualified cybersecurity professionals.
Six AI risks that should be on every Australian CISO's radar.
The AI threat landscape has matured rapidly. These are not theoretical risks. They are operational realities with quantified financial impact, documented incidents, and regulatory attention.
Shadow AI and data leakage
The most immediate risk is not sophisticated AI-powered attacks. It is your own people sharing sensitive data with AI platforms that have no contractual obligation to protect it. Research shows 78% of AI users bring their own tools to work without IT approval. Cyberhaven's analysis found that 39.7% of all AI interactions now involve sensitive data, up from 10.7% two years earlier. Source code represents 18.7% of sensitive data flowing into AI tools, followed by R&D materials at 17.1%.
The Samsung incident in 2023 made this concrete. Within three weeks of lifting its ban on employee AI usage, Samsung's semiconductor division experienced three separate data leaks. Engineers pasted proprietary source code and internal meeting transcripts into a free AI platform. The data was transmitted to the provider's servers and is irrecoverable. Samsung subsequently banned generative AI tools entirely.
AI-powered social engineering
AI has fundamentally changed the economics of phishing. Research indicates that 82.6% of phishing emails now incorporate AI-generated content, and phishing volume has increased 4,151% since late 2022. IBM researchers demonstrated that AI can create a phishing attack in five minutes that matches the effectiveness of one taking human social engineers 16 hours, at 95% lower cost.
Deepfake technology has progressed from novelty to operational weapon. Deepfake fraud incidents rose 3,000% in 2023, with a further 680% increase in 2024. Voice cloning now requires as little as three seconds of audio to achieve 85% accuracy, and human detection accuracy for AI-generated voices sits at only 60%.
Hallucination and inaccuracy
AI models produce confident, plausible, and entirely fabricated outputs. This is not a bug that will be patched. A 2025 mathematical proof established that hallucinations are structurally inevitable under current LLM architectures. For business-critical decisions, unverified AI outputs are a liability.
Global business losses from AI hallucinations reached an estimated $67.4 billion in 2024, with employees spending an average of 4.3 hours per week verifying AI outputs. The Air Canada chatbot case (covered in section 04) established that organisations are legally liable for information provided by their AI systems, regardless of whether the information is accurate.
Bias in AI outputs
AI models reflect the biases in their training data. University of Washington research testing three leading LLMs on identical resumes found white-associated names were preferred 85% of the time versus 9% for Black-associated names. An estimated 98.4% of Fortune 500 companies now use AI in hiring processes, making this a systemic risk for any organisation using AI in decision-making that affects individuals.
Prompt injection and jailbreaking
The OWASP Top 10 for LLM Applications 2025 ranks prompt injection as the number one risk for AI systems. Crafted inputs can alter LLM behaviour to bypass guardrails, access unauthorised data, or execute unintended actions. This risk is particularly acute for customer-facing AI deployments such as chatbots, virtual assistants, and AI-powered search.
Security audits find prompt injection vulnerabilities in over 73% of production AI deployments. A medical LLM study found prompt injection attacks succeeded in 94.4% of trials, including scenarios involving dangerous drug recommendations. No single defence is sufficient; this requires defence-in-depth.
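As a simplified illustration of that layered approach, the sketch below combines basic input screening with an allow-list on what the model is permitted to trigger. The patterns and action names are placeholders invented for the example; real deployments also need output filtering, least-privilege API scopes, and monitoring, because pattern matching alone will not stop a determined attacker.

```python
import re

# Layer 1: screen obviously suspicious input before it reaches the model.
# Illustrative patterns only; they will not catch every injection attempt.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
]

# Layer 2: the model may only trigger actions on an explicit allow-list,
# regardless of what the prompt or any retrieved content tells it to do.
ALLOWED_ACTIONS = {"search_kb", "create_ticket"}


def screen_input(user_input: str) -> bool:
    """Return True if the input may be forwarded to the model."""
    return not any(p.search(user_input) for p in SUSPICIOUS_PATTERNS)


def authorise_action(requested_action: str) -> bool:
    """Enforce least privilege on anything the model asks to execute."""
    return requested_action in ALLOWED_ACTIONS
```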
Supply chain risk from AI tools and plugins
The OWASP Top 10 identifies AI supply chain as the third highest risk. AI tools depend on third-party models, training data, plugins, and APIs that introduce vulnerabilities outside your direct control. Research has demonstrated that as few as 250 malicious documents can successfully backdoor LLMs, and standard safety training fails to remove these backdoors. IBM's 2025 breach report found supply chain compromises cost USD $4.91 million on average and take 267 days to detect.
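One practical control is to pin and verify the integrity of third-party models, plugins, and other AI components before they are loaded, rather than trusting whatever a registry serves on the day. The sketch below assumes a simple manifest of SHA-256 hashes recorded when each component was assessed and approved; the file names and hash values are placeholders.

```python
import hashlib
from pathlib import Path

# Illustrative manifest: artifact name -> SHA-256 recorded at approval time.
PINNED_HASHES = {
    "summariser-model.bin": "<sha256 recorded when the model was approved>",
    "crm-plugin.whl": "<sha256 recorded when the plugin was approved>",
}


def verify_artifact(path: Path) -> bool:
    """Recompute the file's SHA-256 and compare it to the pinned value."""
    expected = PINNED_HASHES.get(path.name)
    if expected is None:
        return False  # unapproved component: reject by default
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected
```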
Real incidents that demonstrate why AI governance is not optional.
These are not edge cases. They are the predictable consequences of deploying AI without adequate security architecture, without proper testing, and without governance that addresses AI-specific risks. In every one of these cases, the technology worked; the guardrails didn't exist.
ISO/IEC 42001: the management system standard for AI.
If your organisation already holds ISO 27001 certification, you understand the value of a management system approach to risk. ISO/IEC 42001 applies the same discipline to AI.
Published on 18 December 2023, ISO/IEC 42001 is the world's first internationally certifiable management system standard for artificial intelligence. It specifies requirements for establishing, implementing, maintaining, and continually improving an AI Management System (AIMS). It applies to any organisation that develops, deploys, or uses AI-based products or services.
What ISO 42001 covers that ISO 27001 doesn't
ISO 27001 focuses on information security: confidentiality, integrity, and availability. ISO 42001 addresses the risks unique to AI systems: ethics, transparency, fairness, bias mitigation, explainability, safety, and human oversight. An organisation can be fully ISO 27001 certified and still have zero governance over how its AI systems make decisions, what data they were trained on, or whether their outputs are fair.
Structure and controls
ISO 42001 follows the Harmonised Structure (Annex SL), the same clause architecture as ISO 27001 and ISO 9001. Clauses 4 through 10 are mandatory and cover the familiar territory: context, leadership, planning, support, operation, performance evaluation, and improvement.
Annex A contains 38 controls across nine domains:
| Domain | Focus area |
|---|---|
| A.2 | AI policies — documentation, alignment, and review of AI-specific policies |
| A.3 | Internal organisation — roles, responsibilities, and accountability for AI governance |
| A.4 | Resources for AI systems — data, tooling, computing, and human expertise requirements |
| A.5 | AI impact assessment — technical and societal consequence evaluation |
| A.6 | AI system lifecycle — full lifecycle management from design through decommissioning |
| A.7 | Data for AI systems — data governance, quality, and provenance management |
| A.8 | Information for interested parties — transparency and stakeholder communication |
| A.9 | Use of AI systems — boundaries, safeguards, and intended purpose alignment |
| A.10 | Third-party and customer relationships — supplier management and shared responsibilities |
Annex B provides implementation guidance (analogous to ISO 27002), while Annex C lists AI objectives and risk sources, and Annex D covers cross-sector application.
Integration with ISO 27001
For organisations already certified to ISO 27001, the integration path is efficient. You can leverage existing risk assessment processes, internal audit programmes, document management systems, and management review cycles. The structural alignment means a single integrated management system can cover both information security and AI governance.
The Cloud Security Alliance, drawing on its auditing experience, notes common implementation challenges: organisations treating AI risk as a subset of IT risk (it isn't), underestimating the effort required for AI impact assessments, and failing to involve non-technical stakeholders in governance.
Adoption is early but accelerating
ISO 42001 certification is still in its early stages globally. In Australia, the standard gained early attention when BSI issued its first global ISO 42001 certification in October 2024. A Cloud Security Alliance benchmark found that 76% of organisations plan to pursue frameworks like ISO 42001, while only 37% currently conduct regular AI risk assessments.
Cliffside has ISO 42001 trained consultants who can advise on readiness assessment, gap analysis, integration with existing ISO 27001 management systems, and implementation planning.
Where Australia stands on AI regulation, and what's coming.
Australia does not currently have AI-specific legislation. The government's approach relies on existing technology-neutral laws supplemented by voluntary guidance, but that is changing.
Current Australian framework
The Australian Government's Voluntary AI Safety Standard, published in September 2024, established 10 voluntary guardrails. In October 2025, it was updated by the Guidance for AI Adoption, which condensed the guardrails into six essential practices. The National AI Plan (December 2025) confirmed reliance on existing legal frameworks with incremental amendments rather than new AI-specific legislation.
The OAIC published two significant guidance documents in October 2024 clarifying that the Privacy Act applies to both input and output data of AI systems. Critically, AI-generated information about identifiable individuals, including hallucinations, constitutes “personal information” under the Act. A new automated decision-making disclosure obligation takes effect 10 December 2026.
Financial services face heightened expectations
Global regulation has extraterritorial reach
The EU AI Act, the world's first comprehensive AI regulation, entered into force on 1 August 2024 with phased implementation through 2027. Australian organisations are within scope if they place AI systems on the EU market or if their AI outputs are used by persons in the EU. Penalties reach up to EUR 35 million or 7% of global turnover for prohibited practices.
The NIST AI Risk Management Framework (AI RMF 1.0) organises AI risk management into four functions: Govern, Map, Measure, and Manage. The companion Generative AI Profile (NIST AI-600-1) extends the framework specifically for GenAI risks. Australia's Voluntary AI Safety Standard explicitly references the NIST framework.
The direction is clear. Even where regulation remains voluntary today, regulators expect organisations to demonstrate that they are governing AI proactively within existing frameworks. “We didn't think about AI security” is not a defensible position for any regulated entity.
Secure AI adoption checklist: 10 controls every organisation should implement.
Use this checklist to assess your organisation's AI governance posture. These controls are drawn from ISO 42001, the OWASP Top 10 for LLMs, and our own experience implementing AI governance for Australian organisations.
Cliffside's approach to AI security: from policy to penetration testing.
We don't just advise on AI security. We live it. As an AI-first consultancy, every recommendation we make has been tested against our own operations first. That gives us a practitioner perspective that purely advisory firms lack.
Our AI security capability spans the full lifecycle:
AI security policy and governance
We help organisations build AI acceptable use policies, data classification frameworks for AI interactions, and governance structures aligned to ISO 42001. Our ISO 42001 trained consultants can assess your current AI governance maturity and build a roadmap to a defensible management system. For organisations already holding ISO 27001 certification, we design integrated management systems that cover both information security and AI governance.
AI security architecture
For financial institutions and enterprises building their own AI capabilities, we provide security architecture for AI hubs, platforms, and integrations. This includes threat modelling for AI systems, secure design patterns for LLM deployments, data pipeline security, and architecture review for RAG (Retrieval-Augmented Generation) implementations.
AI penetration testing
We penetration test AI systems, chatbots, and LLM deployments. Our testing methodology is aligned to the OWASP Top 10 for LLM Applications and covers prompt injection, jailbreaking, data leakage, system prompt extraction, bias exploitation, excessive agency, and supply chain risks. We've been testing AI systems since the early days of customer-facing chatbot deployments, when we discovered firsthand how badly things can go wrong without proper guardrails.
AI awareness training
We deliver GenAI-specific awareness training programmes covering safe usage practices, data handling, hallucination risks, and incident reporting. Training is tailored to your organisation's approved tools and policies.
Our approach starts with the security assessment: an honest evaluation of where your organisation stands on AI governance, what gaps exist, and what controls to prioritise. The output is a prioritised roadmap you can use with any provider, including ones we don't work with.
- AI governance maturity assessment against ISO 42001 and OWASP Top 10
- Shadow AI exposure analysis and approved tool register design
- AI acceptable use policy development and staff training
- Security architecture for AI platforms, hubs, and integrations
- Penetration testing of AI systems, chatbots, and LLM deployments
- Transferable report: yours to use, share with auditors, or present to your board