
Six Security Gates: the governance model that makes cyber risk explicit.

Most organisations do not choose to accept cyber risk. They accept it by default: during procurement, during delivery, during go-live, and during handover. Then, when something goes wrong, leadership has to defend a decision they never consciously made. The problem is not a lack of effort or intent. It is a lack of mechanism.

The Six Security Gates model solves this by embedding enforceable decision points at each stage of the technology lifecycle where risk becomes expensive to change. Each gate requires minimum evidence, named owners, and explicit risk acceptance before an initiative proceeds. The result is governance that holds up under regulator, insurer, and board scrutiny.

Written by security governance practitioners who design and implement risk operating models for Australian organisations across financial services, government, and critical infrastructure.

01 / The implicit risk trap

Leadership teams do not lack care. They lack an operating model that forces clarity at the moments where risk gets locked in.

A credible security programme is not measured by how many controls you bought. It is measured by whether you can answer four questions quickly and with evidence:

  • What risks are we accepting?
  • Who owns each one?
  • What evidence supports the decision?
  • What is the plan and deadline to reduce it?

In most organisations, these questions cannot be answered. Not because people are negligent, but because the operating model does not require anyone to answer them at the points that matter. Procurement selects a vendor without a security scorecard. Design locks in insecure defaults because the security review happened too late. Go-live pressure creates permanent exceptions. Handover happens without tested monitoring. At each step, risk is accepted implicitly.

The consequences are real. In Australia, the ASD Annual Cyber Threat Report 2023-24 recorded over 1,100 cyber security incident responses, with a growing proportion linked to third-party compromises and configuration failures: exactly the kind of problems that an earlier governance intervention would catch. Under the Cyber Security Act 2024, mandatory incident reporting and ransomware payment disclosure mean that governance failures now have regulatory consequences, not just operational ones.

The hard truth: If evidence is missing, the risk is already being accepted. The only question is whether you can prove it was explicit, owned, time-bound, and funded.

02 / What the Six Security Gates model is

A gate is a point in the lifecycle where a decision has long-term security consequences. You proceed when the minimum evidence exists, not when confidence feels high.

The Six Security Gates model targets the technology and vendor lifecycle, from initial selection through to steady-state operations. The gates are positioned at the points where risk becomes progressively more expensive to remediate:

  1. RFP & Selection: where vendor due diligence either happens or gets skipped
  2. Vendor Solution Context: where certifications get mistaken for coverage
  3. Contractual Obligations: where you either gain leverage or lose it for years
  4. Design and Integration: where insecure defaults get baked in
  5. Pre Go-Live: where schedule pressure creates permanent exceptions
  6. BAU Handover: where operational ownership either transfers cleanly or does not

This is not a maturity model or a compliance framework. It is a governance mechanism. The gates work because they force specific evidence and specific accountability at the exact points where risk decisions get made, consciously or not. The earlier the gate, the cheaper and faster it is to fix. By the time you reach Gate 5, your options are limited and your remediation costs are high.

Organisations already familiar with security governance frameworks will recognise the logic. The difference is that gates are concrete and enforceable. They produce audit-ready evidence, not policy documents that sit in SharePoint.

03 / Define materiality first

If you try to gate every initiative, you will fail. Leaders will treat it as bureaucracy and route around it.

The single most common failure in security governance is applying heavyweight controls to everything. The result is predictable: teams game the process, rubber-stamp approvals, and the governance mechanism loses credibility.

The solution is materiality. Define clear triggers that determine which initiatives require the full gate process. A practical starting point:

  • Strategic vendor dependency: any initiative involving a new vendor relationship with annual spend above a defined threshold, or where the vendor becomes difficult to replace
  • Customer-facing or internet-exposed systems: anything that expands the external attack surface
  • Regulated or sensitive data: systems handling personal information, financial data, health records, or data subject to specific regulatory requirements such as APRA CPS 234 or the Privacy Act
  • New external access, integrations, or hosting changes: any architecture change that introduces new trust boundaries or third-party connectivity

Everything below the materiality threshold still needs sensible controls, but it does not need the full gate process. Materiality is how you keep governance serious without becoming slow.

Warning: Do not let teams self-assess materiality without oversight. If the team delivering the initiative also decides whether it is material, you have created an obvious conflict of interest. The security function or a governance committee should validate materiality assessments.
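The triggers above can be encoded as a simple, auditable check that the security function validates rather than the delivery team self-assessing. A minimal sketch in Python — the spend threshold, field names, and trigger labels are illustrative assumptions, not prescriptions:

```python
from dataclasses import dataclass

SPEND_THRESHOLD_AUD = 250_000  # illustrative; set by your governance committee

@dataclass
class Initiative:
    name: str
    vendor_annual_spend: int = 0
    vendor_hard_to_replace: bool = False
    internet_exposed: bool = False
    handles_regulated_data: bool = False
    new_trust_boundary: bool = False

def materiality_triggers(init: Initiative) -> list[str]:
    """Return every trigger that makes this initiative material (empty list = below threshold)."""
    triggers = []
    if init.vendor_annual_spend >= SPEND_THRESHOLD_AUD or init.vendor_hard_to_replace:
        triggers.append("strategic vendor dependency")
    if init.internet_exposed:
        triggers.append("customer-facing or internet-exposed")
    if init.handles_regulated_data:
        triggers.append("regulated or sensitive data")
    if init.new_trust_boundary:
        triggers.append("new external access, integration, or hosting change")
    return triggers

crm = Initiative("CRM replacement", vendor_annual_spend=400_000, handles_regulated_data=True)
print(materiality_triggers(crm))
# a non-empty list means the full gate process applies; the security function
# still validates the inputs rather than trusting the delivery team's self-assessment
```

The point of the sketch is not automation for its own sake: writing the triggers down as explicit rules is what makes a materiality decision reviewable after the fact.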

04 / Named decision owners

Accountability cannot drift if every risk decision is traceable to an individual, not a committee.

Every material initiative needs a minimum set of named decision owners. The classic failure mode in security governance is diffused accountability: everyone assumes someone else has it covered, and when things go wrong, no individual can explain what was decided or why.

Executive Sponsor

Go/no-go accountability. Typically the CIO, COO, or business unit head. This person signs off on proceeding past each gate, and their name is on the record when residual risk is accepted. This is not a ceremonial role; it creates personal accountability for the risk position.

Security Lead

Evidence quality and exceptions management. Responsible for confirming that the evidence pack at each gate meets minimum standards, that exceptions are properly documented with owners and remediation dates, and that residual risk statements are technically accurate.

Procurement & Legal

Supplier controls and contract enforceability. Owns the contractual obligations gate. Ensures that security requirements are measurable (not "reasonable endeavours"), that remediation SLAs exist, that audit rights are preserved, and that incident cooperation terms are explicit.

Operations Owner

Supportability and BAU readiness. Confirms that operational processes exist before handover: monitoring, alerting, escalation, patching cadence, runbooks, and on-call arrangements. Without this role, the gap between "project complete" and "operationally secure" becomes the source of future incidents.

For organisations with a virtual CISO arrangement, the security lead role maps naturally to the vCISO function. The key principle is that risk acceptance decisions must be traceable to individuals, not committees or generic role titles.

05 / What each gate demands

The minimum evidence, the common failure, and why each gate matters.

Each gate below specifies the minimum evidence required to proceed and the typical failure mode when the gate is skipped or treated as a formality.

Gate 1
RFP & Selection
This is where "we bought a brand name" becomes a substitute for due diligence. Most procurement processes either omit security requirements entirely or add them as a last-minute appendix that no one scores.
Minimum evidence
  • Scored security requirement set embedded in the RFP, not bolted on as an afterthought
  • Shared responsibility matrix documenting what the vendor secures versus what remains your responsibility
  • Fit checks for identity integration, logging, monitoring, and operational support model
  • Plain-language residual risk statement completed before vendor selection
Common failure
Security requirements arrive after the vendor is already selected, reducing them to a negotiation exercise rather than a selection criterion. By this point, switching cost and sunk effort make it politically difficult to walk away from a poor security fit.
Gate 2
Vendor Solution Context
This is where you stop assuming that a SOC 2 report or ISO 27001 certificate means the vendor is secure for your specific use case. Certifications describe what a vendor does in general. They do not describe what the vendor does for your deployment, your data, your integration points.
Minimum evidence
  • Certification scope checks mapped against your exact deployment model and data flows
  • Documented high-level architecture showing where your data sits and how it moves
  • Known risks logged with a named owner, treatment plan, and remediation due date
Common failure
The vendor presents a SOC 2 Type II report. The organisation accepts it without checking whether the scope covers the services being consumed. Critical gaps in logging, encryption at rest, or data residency go undetected until an incident or audit surfaces them.
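The scope check at the heart of this gate is, in essence, a set comparison: which of the services you actually consume fall outside what the certification attests? A minimal sketch (service names are hypothetical):

```python
# services named in the vendor's SOC 2 report scope (illustrative names)
soc2_scope = {"compute", "object-storage", "identity"}

# services your deployment actually consumes
services_consumed = {"compute", "object-storage", "managed-database", "log-export"}

# anything consumed but not attested needs its own evidence, risk entry, and owner
uncovered = services_consumed - soc2_scope
print(sorted(uncovered))  # ['log-export', 'managed-database']
```

Trivial as the comparison is, most organisations never perform it: the report is filed as evidence without anyone listing the consumed services against its scope.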
Gate 3
Contractual Obligations
This is where you either gain leverage or lose it for years. Once the contract is signed, your ability to compel security outcomes from the vendor drops sharply. Every security obligation that is not in the contract is a request, not a requirement.
Minimum evidence
  • Measurable security obligations with defined standards, not "reasonable endeavours" or "industry best practice"
  • Remediation SLAs for vulnerabilities by severity, with a defined reporting cadence
  • Clear incident notification and cooperation terms, including timelines aligned with your regulatory obligations
  • Audit rights and a right to conduct independent security testing
  • Data handling clarity: residency, access controls, subprocessor disclosure, and exit provisions
Common failure
The contract contains vague security language that sounded acceptable during negotiation but provides no enforcement mechanism when a vulnerability is disclosed and the vendor's remediation timeline is measured in quarters, not days. Organisations subject to CPS 234 or the SOCI Act find they cannot meet their own regulatory obligations because the vendor contract does not compel timely action.
Gate 4
Design & Integration
This is where insecure defaults get baked into the architecture and become "too hard to change" by go-live. Design-time decisions about authentication flows, network segmentation, logging granularity, and encryption scope determine the security ceiling of the solution for its entire operational life.
Minimum evidence
  • Security non-functional requirements that are owned, tracked, and testable
  • Early design reviews with findings captured and remediation tracked
  • Testable acceptance criteria for security controls
  • A decision log and exceptions register, with each exception owned and time-bound
Common failure
Security reviews happen at the end of design, not during it. By the time findings are raised, the architecture is locked, the development is underway, and remediation requires rework that nobody has budgeted for. Findings get deferred to "post go-live" and never addressed. A proper security architecture review during design catches these issues when they are still cheap to fix.
Gate 5
Pre Go-Live
This is where go-live pressure creates permanent exceptions. Every organisation has experienced the conversation: "we know there are open findings, but the business needs this live by Friday." The result is residual risk that is accepted under duress, without proper documentation, and with no funded plan to remediate.
Minimum evidence
  • Evidence that critical controls exist, are configured correctly, and have been independently tested
  • Residual risk explicitly accepted in plain language, with a named executive owner
  • Every open exception owned, time-bound, and funded for remediation
Common failure
A penetration test is conducted two days before go-live. Critical findings are raised. The test report is filed, but the system launches on schedule because the commercial deadline takes priority. The findings age in a register that nobody reviews. The exception becomes permanent.
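The pre-go-live rule — every open exception owned, funded, and time-bound — can be made mechanical rather than negotiable. A minimal sketch, assuming a simple exceptions register (field names are illustrative):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class RiskException:
    """One open finding accepted at a gate. All four fields are mandatory by design:
    an exception with no owner, due date, or funding is implicit risk, not an exception."""
    finding: str
    owner: str            # a named individual, not a team or a role title
    remediation_due: date
    funded: bool

def gate_may_pass(exceptions: list[RiskException], today: date) -> bool:
    """Every open exception must be owned, funded, and not already overdue.
    An empty register passes by definition."""
    return all(e.owner and e.funded and e.remediation_due >= today for e in exceptions)

open_items = [
    RiskException("Pen-test: critical SQLi in admin portal", "J. Chen",
                  date(2025, 3, 31), funded=True),
]
print(gate_may_pass(open_items, today=date(2025, 3, 1)))  # True only while the date holds
```

The useful property is that the same check fails automatically once a remediation date slips past, which is exactly the moment the exception would otherwise quietly become permanent.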
Gate 6
BAU Handover
This is where many "security incidents" are really operational failures: patching gaps, missing runbooks, unclear ownership, and untested alerting. The project team moves on, the operations team inherits a system they were not involved in designing, and the gap between "project complete" and "operationally secure" becomes the attack surface.
Minimum evidence
  • Named operational ownership with documented responsibilities
  • Monitoring, alerting, and escalation tested end-to-end, not just documented
  • Enforced remediation cadence for patching, certificate rotation, and key management
  • Handover treated as a gated deliverable, not an optional task at project close
Common failure
The project closes. The operations team receives a handover document they had no input into. Monitoring dashboards exist but alerting thresholds are wrong. The patching process is undocumented. Six months later, a vulnerability is exploited because the system fell through the cracks of the organisation's standard audit and remediation processes.
06 / Governance anti-patterns

Five ways organisations undermine their own security gates, and how to recognise each one.

Even organisations that adopt a gate model can undermine it through predictable anti-patterns. If any of these sound familiar, the governance mechanism is producing paperwork, not decisions.

  • Vibes-based governance

    Confidence replaces evidence. The executive sponsor "feels good" about the vendor, the team reports green status, and the gate is passed without reviewing the actual evidence pack. This is the most common anti-pattern and the hardest to detect because it looks like efficient decision-making.

  • Retrospective exceptions

    Risk is accepted after the decision has already been made. The system is live, the vendor is contracted, the architecture is deployed, and then someone raises the formal exception. The gate exists on paper, but the decision was made before the evidence was assessed.

  • Orphaned risk registers

    Risks are logged but never reviewed. The register grows. Exceptions age. No one is tracking remediation progress, and the register becomes a graveyard of good intentions rather than a decision tool. If the register is not driving action on a weekly or fortnightly cadence, it is not governance.

  • Security theatre gates

    The gate process exists, but it is a rubber stamp. Evidence packs are not challenged. The security lead signs off because blocking a project is politically costly. The gate produces paperwork and the illusion of governance without actually changing any decision.

  • Over-engineering for everything

    Every initiative, regardless of risk, goes through the full gate process. The result is a governance backlog, frustrated delivery teams, and a security function that is seen as an obstacle. Without clear materiality thresholds, the gates lose credibility and teams find ways to route around them.

07 / A 30-60-90 day rollout

You do not need a 12-month transformation. You need two pilots and a weekly cadence.

Security gates fail when they are treated as a documentation exercise. They succeed when they are piloted on real work, refined based on friction, and embedded into existing delivery processes.

Days 0-30
Build the mechanism

Create the gate policy document. Set materiality thresholds with input from security, procurement, and the executive team. Name decision owners for each role. Build minimum viable templates: RFP security scorecard, risk memo, exceptions register, and BAU handover pack. Do not over-engineer the templates; they will evolve during the pilot.

Days 31-60
Pilot on real work

Run Gates 1 to 3 on a procurement pilot and Gates 4 to 6 on a delivery pilot. Use real initiatives, not hypothetical scenarios. Train procurement, PMO, legal, operations, and architecture teams on the gate process. Start a weekly gate cadence to prevent bottlenecks. Document what works, what creates friction, and what needs to change.

Days 61-90
Scale and lock it in

Embed gates into PMO stage gates so they are part of the standard delivery methodology, not a parallel process. Produce a monthly board pack showing decisions made, residual risk accepted, and exception ageing by owner. Run assurance sampling on evidence packs. Test decision-making under pressure with a tabletop exercise scenario where gate evidence is incomplete and the business is pushing for go-live.

08 / What to measure

If a metric cannot drive a decision, remove it.

Security gate metrics should tell leadership two things: whether the mechanism is being used, and whether it is producing defensible decisions. Vanity metrics do not protect leaders. Evidence does.

  • %
    Gate adoption rate

    Percentage of material initiatives that completed all applicable gates. Target 100% for material initiatives within the first two quarters. Anything below 80% means the materiality thresholds or the process itself needs adjustment.

  • Exception ageing by owner

    Average age of open exceptions, broken down by named owner. This is the single most revealing metric. If exceptions are ageing beyond their committed remediation dates, the governance mechanism is producing paperwork, not outcomes.

  • Decision cycle time

    Time from issue raised to decision made. This measures whether the gates are creating bottlenecks. A well-run gate cadence should resolve most decisions within one to two weeks. Longer cycle times indicate process friction or unclear ownership.

  • Residual risk trend

    Count of explicitly accepted residual risks, tracked over time. A rising trend is not necessarily bad; it may mean the organisation is becoming more honest about its risk position. But unmanaged growth signals that risk acceptance is becoming a default rather than a conscious decision.

  • 🔍
    Incident traceability

    Percentage of incidents that can be traced to a delivery or handover gap. This is the lagging indicator that validates whether the gates are catching the right risks. If incidents keep recurring in areas that gates should have caught, the evidence requirements need strengthening.
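The exception-ageing metric falls straight out of the register. A minimal sketch, assuming each open exception records a named owner and a committed remediation date (records and dates are illustrative):

```python
from collections import defaultdict
from datetime import date

# each record: (owner, committed remediation date) for an open exception
open_exceptions = [
    ("J. Chen", date(2025, 1, 15)),
    ("J. Chen", date(2025, 3, 1)),
    ("A. Patel", date(2024, 11, 30)),
]

def ageing_by_owner(exceptions, today: date) -> dict[str, int]:
    """Days each owner's worst exception is past its committed date (0 if none are overdue)."""
    worst: dict[str, int] = defaultdict(int)
    for owner, due in exceptions:
        overdue = max(0, (today - due).days)
        worst[owner] = max(worst[owner], overdue)
    return dict(worst)

print(ageing_by_owner(open_exceptions, today=date(2025, 2, 1)))
# {'J. Chen': 17, 'A. Patel': 63}
```

Reporting the worst overdue figure per named owner, rather than an averaged total, is what makes this metric drive decisions: a 63-day slip with a name attached is a conversation, not a statistic.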

09 / How we help

If you want this model to stick, treat it like an operating model, not a document.

We implement security governance models for Australian organisations. Not as a policy-writing exercise, but as an operating model that changes how risk decisions actually get made. We have seen what works, what gets gamed, and what leadership teams will actually sustain beyond the first quarter.

Our approach starts with the Lighthouse Assessment: an honest evaluation of your current governance maturity, your risk decision processes, and the gaps between what your policies say and what actually happens when delivery pressure hits.

We will tell you honestly:

  • Where your current risk decisions are being made implicitly, and what that means for defensibility
  • Which governance mechanisms are working and which are producing paperwork without decisions
  • How to set materiality thresholds that keep governance serious without creating bottlenecks
  • What a realistic 90-day implementation looks like for your size and structure
  • How to build a board reporting pack that demonstrates governance quality, not just activity
  • Whether your vendor and procurement processes have the right controls for your risk profile

If the honest answer is "your current governance is producing paperwork, not decisions," we will say that. If the answer is "you need a risk management framework before you need gates," we will say that too.

Security governance

Know where your risk decisions actually get made.

The Cliffside Lighthouse Assessment gives you an honest picture of your security governance maturity, including where risk is being accepted implicitly, which governance mechanisms are producing decisions versus paperwork, and what a realistic improvement path looks like for your organisation.

What you get from the Lighthouse Assessment
  • Honest evaluation of your current security governance maturity
  • Gap analysis between policy documentation and actual risk decision practices
  • Materiality threshold recommendations for your organisation's risk profile
  • 90-day implementation roadmap tailored to your size and structure
  • Transferable report, yours to use with any provider