The Problem With How Most Organisations Run Phishing Simulations
Phishing remains the most common initial access vector for cyber attacks. The Verizon 2024 Data Breach Investigations Report found that 68% of breaches involved a human element, with phishing and pretexting accounting for a significant proportion. ASD's Annual Cyber Threat Report consistently identifies phishing as a primary concern for Australian organisations.
Most organisations know this. The response is usually a phishing simulation programme. But the way most programmes are designed and measured guarantees they will underperform.
The typical approach looks like this: run a quarterly simulation, measure the click rate, send the results to the CISO, schedule some generic training, repeat. The click rate drops initially because people learn to recognise that specific template. Then the programme stalls, the board sees a low click rate and assumes the problem is solved, and the organisation's actual susceptibility to real phishing remains largely unchanged.
The programme we ran took a different approach. Here is what happened.
What 12 Months of Data Showed Us
We ran a continuous phishing simulation programme for 12 months with a mid-sized Australian organisation. Monthly campaigns, progressive difficulty, immediate targeted training for anyone who clicked, and detailed per-department tracking. The platform was KnowBe4, the same platform we use for our Awareness as a Service managed programme.
Month one: 25% click rate, 3% reporting rate
The first campaign used a moderate-difficulty template: a supplier invoice notification with a plausible sender domain and a credential harvesting link. One in four employees clicked. That number, while concerning, was within the range we typically see for untrained populations. Industry baseline data from KnowBe4's annual benchmarking report puts the average phish-prone percentage for mid-sized organisations at around 34%.
The more telling number was the reporting rate. Only 3% of employees used the phish alert button to report the simulation as suspicious. That means for every person who clicked, roughly three others saw the email, did not click, but also did not report it. In a real attack, those silent non-reporters represent a massive gap. They might not have fallen for this particular email, but they are not contributing to the organisation's collective defence.
Months two through six: click rate dropped to 5%, reporting rate climbed to 45%
With monthly campaigns and immediate training triggered by each failure, the click rate fell steadily. By month three it was at 8%. By month six, consistently below 5%. More importantly, the reporting rate climbed from 3% to 45%. That shift matters far more than the click rate decline.
Here is why: in a real phishing attack, the organisation does not need every employee to be individually impervious. It needs enough employees to recognise and report the attack quickly so the security team can respond before the attacker achieves their objective. A reporting rate above 40% means the security team is likely to receive an alert within minutes of a phishing campaign hitting inboxes, even if a handful of people click.
Months seven through twelve: difficulty increased to maximum, click rate stayed below 5%
At the six-month mark, the client asked us to increase the difficulty. We moved to level five: highly targeted spear phishing using internal naming conventions, mimicking real business processes, and referencing current projects. These templates were designed to be genuinely difficult to distinguish from legitimate business email.
The click rate held below 5%. More significantly, the reporting rate continued to climb, reaching above 60% by month twelve. The workforce was not just resisting phishing. They were actively hunting for it.
Five Lessons That Changed How We Design Programmes
1. Click rate is the wrong primary metric
Every phishing simulation vendor reports click rates prominently because they are easy to measure and easy to present to a board. But a low click rate can mask a dangerous reality: people who do not click but also do not report are not contributing to your defence.
The metrics that matter more:
- Reporting rate. What percentage of recipients actively flagged the simulation as suspicious? This measures whether your people are engaged defenders or passive bystanders.
- Time to first report. How quickly does the first report arrive after the simulation is sent? In a real attack, speed of detection determines the window an attacker has to operate.
- Repeat clickers. What percentage of people who clicked in one campaign clicked again in the next? This identifies individuals who need more intensive, possibly one-on-one, intervention.
- Department-level variance. Which teams are consistently more susceptible? This reveals where operational pressures, such as high email volume, time pressure, or frequent supplier interactions, create structural vulnerability.
2. Progressive difficulty is non-negotiable
Starting with high-difficulty simulations wastes your baseline data. If you send a sophisticated spear phishing email in month one and 60% of people click, you have not learned anything useful because even security-aware employees might fall for a well-crafted targeted attack. You have no way to distinguish between people who lack basic awareness and people who were simply outmatched by a sophisticated lure.
Start at a level that establishes a genuine baseline: commodity phishing with recognisable indicators. Increase difficulty incrementally as the organisation's resilience improves. The progression should mirror how real attackers escalate: from broad opportunistic campaigns to targeted, research-driven attacks.
The programme we ran used a five-level difficulty scale:
- Level 1: Generic phishing with obvious red flags (misspelled domains, generic greetings, poor formatting).
- Level 2: Branded phishing mimicking known services (Microsoft 365, Australia Post, ATO notifications).
- Level 3: Business-context phishing using the organisation's industry terminology and plausible business scenarios.
- Level 4: Spear phishing using real employee names, department structures, and current business events.
- Level 5: Highly targeted attacks mimicking internal processes, referencing real projects, and using spoofed internal domains.
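One way to operationalise the escalation is to encode the scale as data and advance a level only when measured results warrant it. A sketch under stated assumptions: the two-consecutive-campaigns rule and the 5% threshold below are illustrative defaults, not an industry standard.

```python
# Five-level difficulty scale, encoded as data so the progression is documented.
DIFFICULTY_LEVELS = {
    1: "Generic phishing with obvious red flags",
    2: "Branded phishing mimicking known services",
    3: "Business-context phishing using industry terminology",
    4: "Spear phishing using real names and current events",
    5: "Targeted attacks mimicking internal processes",
}

def next_level(current: int, recent_click_rates: list[float],
               threshold: float = 0.05, streak: int = 2) -> int:
    """Advance one level only after `streak` consecutive campaigns below `threshold`.

    Both parameters are assumptions for illustration; tune them to the
    organisation's risk appetite and measured improvement.
    """
    recent = recent_click_rates[-streak:]
    if (current < max(DIFFICULTY_LEVELS)
            and len(recent) == streak
            and all(rate < threshold for rate in recent)):
        return current + 1
    return current
```

Keeping the escalation rule explicit like this also gives you the documented, improvement-tied progression that auditors ask for.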
3. Training must be immediate and specific
Annual security awareness training produces compliance certificates, not behaviour change. Research on learning retention, including studies referenced by the Australian Cyber Security Centre, shows that generic training delivered weeks after a failure has minimal impact on future behaviour.
What works is immediate, context-specific feedback. When someone clicks a simulated phishing link, they should receive training within seconds that explains exactly what they missed: the sender domain anomaly, the urgency language, the mismatched URL. The training should take two to three minutes, not thirty. And it should be specific to the lure type they fell for, not a generic cybersecurity overview.
This principle of training at the moment of failure, on the specific skill that was lacking, is what drives sustained behaviour change. It is also why effective phishing simulation programmes require a platform that integrates simulation delivery with training delivery, which is how we structure our managed awareness programme.
4. Department-level data reveals structural risk, not just individual weakness
One of the most valuable outputs from the 12-month programme was per-department analysis. Finance and accounts payable teams had consistently higher click rates on invoice and payment-related lures. That is not because those teams are less security-aware. It is because their daily work involves processing exactly the kind of requests that phishing emails mimic.
This is a structural risk, not an awareness gap. The response should not be more training for finance. It should be stronger technical controls around payment processes, dual-authorisation for payment changes, out-of-band verification procedures, and email authentication controls like DMARC to reduce the chance of supplier impersonation reaching inboxes in the first place.
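Of those controls, DMARC is the simplest to show concretely: it is published as a DNS TXT record on the organisation's sending domain. A minimal example, with placeholder domain and report address (`p=quarantine` is a common intermediate step before moving to `p=reject`):

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

The `rua` address receives aggregate reports that show who is sending mail claiming to be from your domain, which is useful evidence before tightening the policy.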
Good phishing simulation data does not just tell you who clicked. It tells you where your business processes create vulnerability that training alone cannot fix. That insight feeds directly into your broader security architecture and control design.
5. Punitive approaches destroy reporting culture
Some organisations name and shame employees who click. Others impose consequences: mandatory retraining, manager notifications, or performance impacts. This approach is counterproductive, and the data shows why.
In organisations that punish simulation failures, reporting rates consistently stay low. People who are afraid of consequences do not report suspicious emails because reporting draws attention to the fact that they interacted with the email. They delete it quietly and hope no one notices. In a real attack, that silence is catastrophic.
The programme we ran used a deliberately supportive approach: no naming, no shaming, no manager escalation for individual clicks. Training was framed as skill development, not remediation. The result was a reporting culture where employees actively competed to be the first to flag a simulation, which is exactly the behaviour you want when a real attack arrives.
What a Good Programme Looks Like
Based on this programme and the broader data from our social engineering assessment work, here is what separates effective phishing simulation programmes from compliance exercises.
- Continuous, not periodic. Monthly simulations at minimum. Quarterly is too infrequent to build muscle memory or detect trends. Research shows susceptibility returns to baseline within four to six months of a training event.
- Progressive difficulty. Start at commodity level and increase. The progression should be documented and tied to the organisation's measured improvement.
- Immediate, targeted training. Delivered within seconds of a click, specific to the lure type, and taking no more than a few minutes. Not a 30-minute module assigned two weeks later.
- Reporting-focused measurement. Track reporting rate alongside click rate. A 5% click rate with a 10% reporting rate is worse than a 10% click rate with a 60% reporting rate.
- Department-level analysis. Identify structural risk patterns that training alone cannot address. Feed this data into control design and process improvement.
- No punitive consequences. Frame the programme as skill development. Protect psychological safety. Reward reporting behaviour.
- Varied lure types. Rotate across email phishing, SMS (smishing), and voice phishing (vishing) if your programme scope includes multi-channel testing. Real attackers do not limit themselves to email.
- Board-level reporting. Provide quarterly reports that show trends, not just snapshots. The board should see improvement trajectories, department-level risk heat maps, and reporting culture metrics alongside the headline click rate.
The Australian Context
Several Australian regulatory and standards frameworks either require or strongly recommend phishing simulation and security awareness programmes.
ISO 27001 Annex A control 6.3 requires information security awareness, education, and training. A well-documented phishing simulation programme with measured outcomes is one of the strongest forms of evidence for this control during certification audits.
APRA CPS 234 paragraph 15 requires APRA-regulated entities to maintain an information security capability commensurate with the size and extent of threats to their information assets. For financial services organisations, this increasingly means demonstrating an active awareness programme, not just annual training completion records.
The ASD Essential Eight does not include a dedicated awareness control, but the broader Information Security Manual (ISM) includes extensive personnel security guidance. Organisations pursuing Essential Eight maturity should consider phishing simulation as a complementary programme that reduces the human risk surface alongside the technical controls.
The ACSC's guidance for Australian businesses consistently emphasises staff awareness as a foundational security measure. For organisations in regulated industries, the ability to demonstrate measurable improvement in phishing resilience over time is becoming a baseline expectation from auditors and regulators.
What to Do Next
If your organisation runs phishing simulations quarterly or annually, you are almost certainly not getting the value you could. If you measure click rates but not reporting rates, you are measuring the wrong thing. And if your programme has not changed difficulty level in the past year, your data has gone stale.
The organisations that materially reduce their human risk surface treat phishing simulation as an ongoing programme, not a project. They invest in continuous testing, immediate training, and metrics that reflect genuine culture change rather than compliance completion.
Cliffside runs phishing simulation and security awareness programmes as a managed service for Australian organisations. If you want an honest assessment of your current programme's effectiveness, or you want to design one that actually changes behaviour, start with a Lighthouse Assessment.