Artificial Intelligence Policy
Last updated: March 2026 · Cliffside Cybersecurity Pty Ltd
Our position on AI
AI is transforming cybersecurity. It is also introducing risk categories that most organisations are not equipped to manage. We see both sides every day, and we refuse to pretend it is only one or the other.
Cliffside uses AI in its own operations. We are transparent about where, how, and under what controls. We believe any organisation that advises clients on AI governance should be willing to hold itself to the same standard. This policy sets out exactly what that looks like for us.
We also believe the AI landscape is evolving faster than any static policy can capture. This document reflects our current position. It will change as the technology changes, as the risks change, and as we learn more about both.
How we use AI
We use AI tools to assist with specific operational tasks. Assist is the operative word. AI does not make decisions for Cliffside. Our consultants do.
Current uses include:
- Document drafting: AI may be used to assist with first drafts of internal and customer-facing documents. Every document is reviewed, refined, and signed off by a qualified consultant before delivery. Analysis, findings, and recommendations are always human-generated.
- Language and editing: Spelling, grammar, and language refinement across documentation and communications.
- Meeting transcription: We use AI-powered transcription for client meetings to ensure accurate records and efficient follow-up. See the Meeting Transcription section below for our specific commitments.
- Research and analysis: AI assists with threat research, framework analysis, and technical reference material. Outputs are validated against authoritative sources before being incorporated into client deliverables.
AI does not replace professional judgement. A penetration test finding is not validated by AI. A risk rating is not assigned by AI. An architecture recommendation is not generated by AI and handed to you. Our consultants use AI as a tool the same way they use any other tool: to work more efficiently while maintaining the quality and rigour our clients expect.
How we protect your data
This is the section that matters most, and we know it.
- Enterprise tenancy only: All AI platforms used by Cliffside operate under enterprise agreements. Our data is contractually excluded from model training. We do not use free-tier or consumer AI tools for any client-related work.
- Data desensitisation: Before any client-related information is processed by AI, we remove client-identifying data. Organisation names, individual names, IP addresses, domain names, and any other information that could identify the client or their environment are stripped or replaced with generic placeholders.
- No client data in training: Your data is never used to train, fine-tune, or improve any AI model. This is a contractual obligation with every AI platform we use, and it is non-negotiable.
- Data residency: Where available, we select Australian or Asia-Pacific data regions for AI processing. We are transparent with clients about where processing occurs when asked.
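To make the desensitisation step above concrete, here is a minimal sketch of the kind of pre-processing it describes. The patterns, placeholder labels, and example names are assumptions for illustration only; they are not Cliffside's actual redaction rules or tooling.

```python
import re

# Hypothetical illustration: replace client-identifying values with generic
# placeholders before text is sent to any AI platform. The patterns and
# placeholder names are assumptions for this example.

PATTERNS = {
    # IPv4 addresses, e.g. 203.0.113.7
    r"\b(?:\d{1,3}\.){3}\d{1,3}\b": "[IP_ADDRESS]",
    # Domain names, e.g. portal.example.com
    r"\b(?:[a-z0-9-]+\.)+(?:com|net|org|au)\b": "[DOMAIN]",
}

# Known client-specific terms (hypothetical examples)
KNOWN_NAMES = {
    "Acme Corp": "[CLIENT]",        # organisation name
    "Jane Citizen": "[INDIVIDUAL]", # individual name
}

def desensitise(text: str) -> str:
    """Strip or replace client-identifying data with placeholders."""
    for name, placeholder in KNOWN_NAMES.items():
        text = text.replace(name, placeholder)
    for pattern, placeholder in PATTERNS.items():
        text = re.sub(pattern, placeholder, text, flags=re.IGNORECASE)
    return text

print(desensitise("Acme Corp's host 203.0.113.7 serves portal.acme.com"))
# → [CLIENT]'s host [IP_ADDRESS] serves [DOMAIN]
```

In practice a redaction pipeline would cover far more identifier types than this sketch, but the principle is the same: the AI platform only ever sees the placeholder, never the client.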
Meeting transcription
We use AI-powered transcription for client meetings. This helps us capture accurate notes, track action items, and deliver better follow-up. We are upfront about this practice and our commitments are straightforward:
- Consent first: We inform clients that transcription is active and seek consent before proceeding. If a client is not comfortable with transcription, we turn it off. No questions asked, no pressure applied.
- Per-meeting control: Consent applies per meeting. A client can be comfortable with transcription in one session and request it be disabled in the next. That is entirely their prerogative.
- Sensitive discussions: For particularly sensitive discussions, we proactively offer to disable transcription even if the client has previously consented.
- Transcript handling: Transcripts are stored in Cliffside's secured systems, subject to the same access controls and retention policies as all other client data.
Approved AI platforms
Cliffside maintains an internal register of approved AI platforms. This register defines which tools our team can use, for what purposes, and under what data classification restrictions.
Platforms are evaluated against the following criteria before approval:
- Enterprise agreement with contractual data protection obligations
- Explicit commitment that inputs are not used for model training
- Acceptable data residency and processing locations
- SOC 2 or equivalent third-party assurance
- Ability to enforce organisational access controls
Unapproved AI tools are prohibited for any client-related work. This includes free-tier versions of platforms that may otherwise have approved enterprise equivalents. We practise what we preach on shadow AI prevention.
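The evaluation criteria above could be encoded as a simple all-or-nothing check. The sketch below is a hypothetical illustration of that logic; the field names and example platform are assumptions, not Cliffside's internal register schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of a register entry encoding the approval criteria.
# Field names are illustrative assumptions, not an actual internal schema.

@dataclass
class PlatformAssessment:
    name: str
    enterprise_agreement: bool    # contractual data protection obligations
    no_training_on_inputs: bool   # explicit no-training commitment
    residency_acceptable: bool    # acceptable data regions
    soc2_or_equivalent: bool      # third-party assurance
    org_access_controls: bool     # enforceable organisational controls

    def approved(self) -> bool:
        """A platform is approved only if it meets every criterion."""
        return all([
            self.enterprise_agreement,
            self.no_training_on_inputs,
            self.residency_acceptable,
            self.soc2_or_equivalent,
            self.org_access_controls,
        ])

# A free tier fails the enterprise-agreement criterion, so it is prohibited
# even where an enterprise equivalent of the same platform is approved.
free_tier = PlatformAssessment("ExampleAI (free tier)", False, False, True, True, False)
print(free_tier.approved())  # → False
```

The design point is that approval is conjunctive: failing any single criterion, such as lacking an enterprise agreement, is enough to keep a tool off the register.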
What we will not do
Some commitments are clearer as a list of things we refuse to do:
- We will not feed client-identifiable data into any AI system
- We will not use consumer or free-tier AI platforms for client work
- We will not use AI to make risk decisions, assign severity ratings, or sign off on deliverables
- We will not use AI-generated content in client deliverables without human review and validation
- We will not record or transcribe meetings without client consent
- We will not use client data to build, train, or improve any AI system, ours or anyone else's
Our advice to clients
We help organisations adopt AI securely. Our advice is grounded in the same principles we apply to ourselves:
- Use AI. But govern it. AI delivers genuine productivity gains. Refusing to adopt it is not a viable strategy. Adopting it without governance is not a viable strategy either. The answer is controlled adoption with proportionate safeguards.
- Know where your data goes. Understand what your AI tools do with your data. Enterprise agreements, not marketing pages. Read the data processing addendums.
- Assume your staff are already using it. Shadow AI is not a hypothetical risk. If you have not provided approved alternatives with enterprise-grade data protection, your people are using consumer tools. The data is already out there.
- Build governance that evolves. A static AI policy written once and forgotten is worse than no policy at all. It gives false confidence. Build governance that includes review triggers, not just review dates.
- Test your AI systems. If AI is making decisions, recommending actions, or touching sensitive data in your organisation, it needs the same rigorous testing you would apply to any other system in your environment.
If you need help with any of the above, that is what we do.
Policy evolution
AI is moving faster than annual policy review cycles. We update this policy as our practices change, as the technology evolves, and as we learn more about the risks and appropriate controls.
We do not wait for a scheduled review date to make changes. When something changes, we update the policy. The current version will always be available at cliffside.com.au/ai-policy/.
If a change materially affects how we handle client data in connection with AI, we will notify affected clients directly.
Questions
If you have questions about how Cliffside uses AI, how your data is handled, or anything else in this policy, ask us. Directly.
Cliffside Cybersecurity Pty Ltd
Level 1, 66 King Street, Sydney NSW 2000
(02) 8916 6389