Leadership teams do not lack care. They lack an operating model that forces clarity at the moments where risk gets locked in.
A credible security programme is not measured by how many controls you bought. It is measured by whether you can answer four questions quickly and with evidence:
- What risks are we accepting?
- Who owns each one?
- What evidence supports the decision?
- What is the plan and deadline to reduce it?
In most organisations, these questions cannot be answered. Not because people are negligent, but because the operating model does not require anyone to answer them at the points that matter. Procurement selects a vendor without a security scorecard. Design locks in insecure defaults because the security review happened too late. Go-live pressure creates permanent exceptions. Handover happens without tested monitoring. At each step, risk is accepted implicitly.
The consequences are real. In Australia, the ASD Annual Cyber Threat Report 2023-24 recorded over 1,100 cyber security incident responses, with a growing proportion linked to third-party compromises and configuration failures: exactly the kinds of problems that earlier governance intervention would catch. Under the Cyber Security Act 2024, mandatory incident reporting and ransomware payment disclosure mean that governance failures now carry regulatory consequences, not just operational ones.
The hard truth: If evidence is missing, the risk is already being accepted. The only question is whether you can prove it was explicit, owned, time-bound, and funded.
A gate is a point in the lifecycle where a decision has long-term security consequences. You proceed when the minimum evidence exists, not when confidence feels high.
The Six Security Gates model targets the technology and vendor lifecycle, from initial selection through to steady-state operations. The gates are positioned at the points where risk becomes progressively more expensive to remediate:
- RFP and Selection: where vendor due diligence either happens or gets skipped
- Vendor Solution Context: where certifications get mistaken for coverage
- Contractual Obligations: where you either gain leverage or lose it for years
- Design and Integration: where insecure defaults get baked in
- Pre Go-Live: where schedule pressure creates permanent exceptions
- BAU Handover: where operational ownership either transfers cleanly or does not
This is not a maturity model or a compliance framework. It is a governance mechanism. The gates work because they force specific evidence and specific accountability at the exact points where risk decisions get made, consciously or not. The earlier the gate, the cheaper and faster it is to fix. By the time you reach Gate 5, your options are limited and your remediation costs are high.
Organisations already familiar with security governance frameworks will recognise the logic. The difference is that gates are concrete and enforceable. They produce audit-ready evidence, not policy documents that sit in SharePoint.
If you try to gate every initiative, you will fail. Leaders will treat it as bureaucracy and route around it.
The single most common failure in security governance is applying heavyweight controls to everything. The result is predictable: teams game the process, rubber-stamp approvals, and the governance mechanism loses credibility.
The solution is materiality. Define clear triggers that determine which initiatives require the full gate process. A practical starting point:
- Strategic vendor dependency: any initiative involving a new vendor relationship with annual spend above a defined threshold, or where the vendor becomes difficult to replace
- Customer-facing or internet-exposed systems: anything that expands the external attack surface
- Regulated or sensitive data: systems handling personal information, financial data, health records, or data subject to specific regulatory requirements such as APRA CPS 234 or the Privacy Act
- New external access, integrations, or hosting changes: any architecture change that introduces new trust boundaries or third-party connectivity
Everything below the materiality threshold still needs sensible controls, but it does not need the full gate process. Materiality is how you keep governance serious without becoming slow.
Warning: Do not let teams self-assess materiality without oversight. If the team delivering the initiative also decides whether it is material, you have created an obvious conflict of interest. The security function or a governance committee should validate materiality assessments.
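The triggers above reduce to a simple triage check. A minimal sketch in Python: the spend threshold, field names, and trigger wording are illustrative assumptions for this sketch, not prescriptions — the real values belong in your gate policy.

```python
from dataclasses import dataclass

# Hypothetical spend threshold; set the real figure in the gate policy.
ANNUAL_SPEND_THRESHOLD_AUD = 100_000

@dataclass
class Initiative:
    name: str
    annual_vendor_spend_aud: int = 0
    vendor_hard_to_replace: bool = False
    internet_exposed: bool = False
    handles_regulated_data: bool = False
    new_trust_boundary: bool = False

def materiality_triggers(i: Initiative) -> list[str]:
    """Return every trigger the initiative hits; empty means below threshold."""
    triggers = []
    if i.annual_vendor_spend_aud >= ANNUAL_SPEND_THRESHOLD_AUD or i.vendor_hard_to_replace:
        triggers.append("strategic vendor dependency")
    if i.internet_exposed:
        triggers.append("expanded external attack surface")
    if i.handles_regulated_data:
        triggers.append("regulated or sensitive data")
    if i.new_trust_boundary:
        triggers.append("new trust boundary or third-party connectivity")
    return triggers

def is_material(i: Initiative) -> bool:
    # Any single trigger puts the initiative through the full gate process.
    return bool(materiality_triggers(i))
```

Per the warning above, the output of a triage like this is an input to the security function's validation, not a self-assessed verdict.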
Accountability cannot drift if every risk decision is traceable to an individual, not a committee.
Every material initiative needs a minimum set of named decision owners. The classic failure mode in security governance is diffused accountability: everyone assumes someone else has it covered, and when things go wrong, no individual can explain what was decided or why.
Go/no-go accountability. Typically the CIO, COO, or business unit head. This person signs off on proceeding past each gate, and their name is on the record when residual risk is accepted. This is not a ceremonial role; it creates personal accountability for the risk position.
Evidence quality and exceptions management. Responsible for confirming that the evidence pack at each gate meets minimum standards, that exceptions are properly documented with owners and remediation dates, and that residual risk statements are technically accurate.
Supplier controls and contract enforceability. Owns the contractual obligations gate. Ensures that security requirements are measurable (not "reasonable endeavours"), that remediation SLAs exist, that audit rights are preserved, and that incident cooperation terms are explicit.
Supportability and BAU readiness. Confirms that operational processes exist before handover: monitoring, alerting, escalation, patching cadence, runbooks, and on-call arrangements. Without this role, the gap between "project complete" and "operationally secure" becomes the source of future incidents.
For organisations with a virtual CISO arrangement, the security lead role maps naturally to the vCISO function. The key principle is that risk acceptance decisions must be traceable to individuals, not committees or generic role titles.
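The traceability principle can be made concrete as a record schema: every accepted risk carries exactly one named individual. A minimal sketch, with assumed field names — the validation rule is illustrative, not a complete safeguard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class RiskAcceptance:
    """One accepted residual risk, traceable to a single named individual."""
    risk_id: str
    description: str       # plain-language residual risk statement
    accepted_by: str       # a person's name, never a committee or generic role title
    gate: int              # which of the six gates the decision was made at
    accepted_on: date
    remediation_due: date  # time-bound: open-ended acceptance is not allowed
    funded: bool           # remediation has a budget line, not just intent

    def __post_init__(self):
        # Crude guard against committee-style owners; accountability is individual.
        if not self.accepted_by or "," in self.accepted_by:
            raise ValueError("accepted_by must name exactly one individual")
```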
The minimum evidence, the common failure, and why each gate matters.
Each gate below specifies the minimum evidence required to proceed and the typical failure mode when the gate is skipped or treated as a formality.
Gate 1: RFP and Selection
- Scored security requirement set embedded in the RFP, not bolted on as an afterthought
- Shared responsibility matrix documenting what the vendor secures versus what remains your responsibility
- Fit checks for identity integration, logging, monitoring, and operational support model
- Plain-language residual risk statement completed before vendor selection
Gate 2: Vendor Solution Context
- Certification scope checks mapped against your exact deployment model and data flows
- Documented high-level architecture showing where your data sits and how it moves
- Known risks logged with a named owner, treatment plan, and remediation due date
Gate 3: Contractual Obligations
- Measurable security obligations with defined standards, not "reasonable endeavours" or "industry best practice"
- Remediation SLAs for vulnerabilities by severity, with a defined reporting cadence
- Clear incident notification and cooperation terms, including timelines aligned with your regulatory obligations
- Audit rights and a right to conduct independent security testing
- Data handling clarity: residency, access controls, subprocessor disclosure, and exit provisions
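The severity-based remediation SLAs above might be encoded and checked like this. The day counts are hypothetical placeholders — the actual values belong in the contract schedule, not in code defaults.

```python
from datetime import date, timedelta

# Hypothetical remediation SLAs in calendar days by vulnerability severity.
REMEDIATION_SLA_DAYS = {
    "critical": 7,
    "high": 30,
    "medium": 90,
    "low": 180,
}

def remediation_deadline(reported: date, severity: str) -> date:
    """Contractual deadline for a vulnerability reported on a given date."""
    return reported + timedelta(days=REMEDIATION_SLA_DAYS[severity])

def is_breached(reported: date, severity: str, today: date) -> bool:
    """True when a vulnerability has aged past its contractual SLA."""
    return today > remediation_deadline(reported, severity)
```

Checks like these only have teeth if the contract also fixes the reporting cadence, so breaches surface on your schedule rather than the vendor's.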
Gate 4: Design and Integration
- Security non-functional requirements that are owned, tracked, and testable
- Early design reviews with findings captured and remediation tracked
- Testable acceptance criteria for security controls
- A decision log and exceptions register, with each exception owned and time-bound
Gate 5: Pre Go-Live
- Evidence that critical controls exist, are configured correctly, and have been independently tested
- Residual risk explicitly accepted in plain language, with a named executive owner
- Every open exception owned, time-bound, and funded for remediation
Gate 6: BAU Handover
- Named operational ownership with documented responsibilities
- Monitoring, alerting, and escalation tested end-to-end, not just documented
- Enforced remediation cadence for patching, certificate rotation, and key management
- Handover treated as a gated deliverable, not an optional task at project close
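The handover requirements above can be treated as a literal gate: the handover proceeds only when every readiness item has tested evidence, not just documentation. A minimal sketch — the checklist item names are illustrative, drawn from the bullets above.

```python
# Gate 6 sketch: handover passes only when every readiness item has evidence.
HANDOVER_CHECKLIST = [
    "named operational owner recorded",
    "monitoring and alerting tested end-to-end",
    "escalation path tested",
    "patching cadence scheduled",
    "runbooks published",
    "on-call arrangements confirmed",
]

def handover_gate(evidence: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (passed, missing_items). Absent or False evidence blocks the gate."""
    missing = [item for item in HANDOVER_CHECKLIST if not evidence.get(item, False)]
    return (not missing, missing)
```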
Five ways organisations undermine their own security gates, and how to recognise each one.
Even organisations that adopt a gate model can undermine it through predictable anti-patterns. If any of these sound familiar, the governance mechanism is producing paperwork, not decisions.
- ✕ Vibes-based governance
Confidence replaces evidence. The executive sponsor "feels good" about the vendor, the team reports green status, and the gate is passed without reviewing the actual evidence pack. This is the most common anti-pattern and the hardest to detect because it looks like efficient decision-making.
- ✕ Retrospective exceptions
Risk is accepted after the decision has already been made. The system is live, the vendor is contracted, the architecture is deployed, and then someone raises the formal exception. The gate exists on paper, but the decision was made before the evidence was assessed.
- ✕ Orphaned risk registers
Risks are logged but never reviewed. The register grows. Exceptions age. No one is tracking remediation progress, and the register becomes a graveyard of good intentions rather than a decision tool. If the register is not driving action on a weekly or fortnightly cadence, it is not governance.
- ✕ Security theatre gates
The gate process exists, but it is a rubber stamp. Evidence packs are not challenged. The security lead signs off because blocking a project is politically costly. The gate produces paperwork and the illusion of governance without actually changing any decision.
- ✕ Over-engineering for everything
Every initiative, regardless of risk, goes through the full gate process. The result is a governance backlog, frustrated delivery teams, and a security function that is seen as an obstacle. Without clear materiality thresholds, the gates lose credibility and teams find ways to route around them.
You do not need a 12-month transformation. You need two pilots and a weekly cadence.
Security gates fail when they are treated as a documentation exercise. They succeed when they are piloted on real work, refined based on friction, and embedded into existing delivery processes.
Create the gate policy document. Set materiality thresholds with input from security, procurement, and the executive team. Name decision owners for each role. Build minimum viable templates: RFP security scorecard, risk memo, exceptions register, and BAU handover pack. Do not over-engineer the templates; they will evolve during the pilot.
Run Gates 1 to 3 on a procurement pilot and Gates 4 to 6 on a delivery pilot. Use real initiatives, not hypothetical scenarios. Train procurement, PMO, legal, operations, and architecture teams on the gate process. Start a weekly gate cadence to prevent bottlenecks. Document what works, what creates friction, and what needs to change.
Embed gates into PMO stage gates so they are part of the standard delivery methodology, not a parallel process. Produce a monthly board pack showing decisions made, residual risk accepted, and exception ageing by owner. Run assurance sampling on evidence packs. Test decision-making under pressure with a tabletop exercise scenario where gate evidence is incomplete and the business is pushing for go-live.
If a metric cannot drive a decision, remove it.
Security gate metrics should tell leadership two things: whether the mechanism is being used, and whether it is producing defensible decisions. Vanity metrics do not protect leaders. Evidence does.
- Gate adoption rate
Percentage of material initiatives that completed all applicable gates. Target 100% for material initiatives within the first two quarters. Anything below 80% means the materiality thresholds or the process itself need adjustment.
- Exception ageing by owner
Average age of open exceptions, broken down by named owner. This is the single most revealing metric. If exceptions are ageing beyond their committed remediation dates, the governance mechanism is producing paperwork, not outcomes.
- Decision cycle time
Time from issue raised to decision made. This measures whether the gates are creating bottlenecks. A well-run gate cadence should resolve most decisions within one to two weeks. Longer cycle times indicate process friction or unclear ownership.
- Residual risk trend
Count of explicitly accepted residual risks, tracked over time. A rising trend is not necessarily bad; it may mean the organisation is becoming more honest about its risk position. But unmanaged growth signals that risk acceptance is becoming a default rather than a conscious decision.
- Incident traceability
Percentage of incidents that can be traced to a delivery or handover gap. This is the lagging indicator that validates whether the gates are catching the right risks. If incidents keep recurring in areas that gates should have caught, the evidence requirements need strengthening.
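Two of the metrics above, adoption rate and exception ageing by owner, reduce to simple aggregations over the gate records. A sketch with assumed record shapes (plain dicts with hypothetical keys):

```python
from collections import defaultdict
from datetime import date

def gate_adoption_rate(initiatives: list[dict]) -> float:
    """Share of material initiatives that completed all applicable gates."""
    material = [i for i in initiatives if i["material"]]
    if not material:
        return 1.0  # nothing material yet, so nothing skipped
    done = sum(1 for i in material if i["all_gates_complete"])
    return done / len(material)

def exception_ageing_by_owner(exceptions: list[dict], today: date) -> dict[str, float]:
    """Average age in days of still-open exceptions, keyed by named owner."""
    ages = defaultdict(list)
    for e in exceptions:
        if e["closed_on"] is None:  # only open exceptions count toward ageing
            ages[e["owner"]].append((today - e["raised_on"]).days)
    return {owner: sum(a) / len(a) for owner, a in ages.items()}
```

Because ageing is keyed by a named owner, the output doubles as the accountability view for the monthly board pack.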
If you want this model to stick, treat it like an operating model, not a document.
We implement security governance models for Australian organisations. Not as a policy-writing exercise, but as an operating model that changes how risk decisions actually get made. We have seen what works, what gets gamed, and what leadership teams will actually sustain beyond the first quarter.
Our approach starts with the Lighthouse Assessment: an honest evaluation of your current governance maturity, your risk decision processes, and the gaps between what your policies say and what actually happens when delivery pressure hits.
We will tell you honestly:
- Where your current risk decisions are being made implicitly, and what that means for defensibility
- Which governance mechanisms are working and which are producing paperwork without decisions
- How to set materiality thresholds that keep governance serious without creating bottlenecks
- What a realistic 90-day implementation looks like for your size and structure
- How to build a board reporting pack that demonstrates governance quality, not just activity
- Whether your vendor and procurement processes have the right controls for your risk profile
If the honest answer is "your current governance is producing paperwork, not decisions," we will say that. If the answer is "you need a risk management framework before you need gates," we will say that too.