Five questions every AO should be sitting with after this year's AO Forum

Cyber, culture, impersonation, smart-tech cheating and AI efficiency — what the day surfaced, and what it means for the rest of us

Published: 23 April 2026

Most of what I heard at the AO Forum wasn't new to me. I've spent the last few years interviewing leaders from across our sector on the Test Community Network podcast, and the themes the day covered are ones I keep returning to in those conversations. What struck me, looking around the room, was how many of these topics were genuinely new, or at least newly urgent, to a lot of the people sitting in it. That, more than anything else, is why these gatherings matter. The shared moment of recognition is half the battle.

I want to focus on five themes in particular. Cybersecurity. Psychological safety. Impersonation as a service. Smart-tech cheating. And the often overlooked positive: the regulator's encouragement to use AI for internal efficiency. Each one carries a hard truth, but each one also got noticeably easier to talk about because we were doing it together.

Cybersecurity: the threat is no longer abstract

If anyone in the room still thought cyber was someone else's problem, the last few months should have changed their mind. NCFE was forced to take its IT systems offline in early December after detecting suspicious activity on its network. Just weeks ago, a cyberattack on the centralised C2K school IT network in Northern Ireland disrupted access to educational systems for hundreds of thousands of students. These are not theoretical case studies; this is happening now, and with the continuous developments in AI coding, it's no longer a question of if but when it happens to you, especially now that the code behind some of the most powerful tools available may have been leaked.

Lucy Sydney from Ofqual put numbers on the trend:

"Critically damaging cyber attacks in schools have risen from 6% in 2023/24 to 10% in 2024/25."
— Lucy Sydney, Ofqual

Her three-lens framing felt genuinely useful for AO leaders working out where to focus next: your own internal arrangements, your centre-facing systems (where dormant accounts and weak MFA are often the back door), and your third-party supply chain. The encouraging part was the candour around the room. The conversations I overheard about MFA, dormant accounts and incident response felt qualitatively different to the ones I was having two years ago. We're getting better at this, and we're getting better at it together.

A question worth sitting with: if your worst week looked like NCFE's or C2K's, what would your stakeholders, regulators and learners actually see?

Psychological safety: the unsung enabler

Topics like cybersecurity need to be discussed, and questions need to be asked. But does your culture support that process? Sam Double from VetSkill delivered a powerful session on why a culture in which people feel safe to speak up matters so much. Her point was that fear silently inhibits the things every AO claims to want: innovation, candid risk reporting, and honest debate. Staff who are afraid of looking incapable simply stop speaking up.

"Psychologically safe organisations are noisy. There is conflict, there is challenge."
— Sam Double, CEO, VetSkill

A quiet team isn't necessarily a healthy team; it might just be one managing impressions. Sam encouraged us to actively interrogate our near misses rather than bury them, and from the regulatory side it was confirmed that silence itself is an alarm bell. Ofqual would rather hear from an AO that has made a mistake than from one that has gone suspiciously quiet. The encouraging thing is that the Forum itself was a small example of what Sam was describing. People shared what wasn't working, asked uncomfortable questions, and walked out better for it. If you aren't investigating any near misses right now, you're probably not looking hard enough.

A question worth sitting with: if your last team meeting was quiet and harmonious, was that maturity, or was it impression management?

Impersonation as a service: the threat has been industrialised

Niamh Pierce from ASRG took us through what is no longer a fringe issue. Impersonation has been commercialised, and the bundle on offer is comprehensive.

"Not only are you getting the impersonator, you're getting the software to bypass proctoring, you're getting behavioural training, and how to appear in front of the camera. They also go in afterwards and wipe any evidence that it was there."
— Niamh Pierce, Head of Research, ASRG

The pricing tells its own story: two to four thousand dollars for a remotely proctored exam, and up to twenty thousand for an in-person sit, currently advertised in Asia. Payment is taken in cryptocurrency; communication happens via self-destructing Telegram messages. The positive here is that ASRG and similar groups are now sharing this intelligence openly with the sector. If you have an interest in this topic and others related to test security, I recommend looking at attending the Conference on Test Security (COTS).

A question worth sitting with: if a £20,000 in-person impersonation service is a real product on a real website today, is your invigilation model designed for that threat, or for the one you faced ten years ago?

Smart tech for cheating: hiding in plain sight

Niamh's second theme moved the threat from digital to physical, and frankly, it was eye-opening. Magnetic earpieces inserted deep into the ear canal with specialist tools, paired with a conduction collar and a toe-controlled MP3 player in the shoe. Smart glasses indistinguishable from prescription frames, with teleprompters and live translation built in. Scientific calculators that connect to the internet at the press of a button. Translation pens marketed on Amazon as a dyslexia aid and on TikTok as a cheating tool, the same product flogged to two very different audiences. A more in-depth list of methods is available here.

So much of our collective attention has gone to generative AI risk that the physical devices walking into examination rooms have evolved quietly in the background. The good news is that once you've seen them, you can train for them, and the appetite around the room to update invigilator briefings was obvious. If your front-line invigilators couldn't pick a magic calculator out of a line-up of regular ones, that is a fixable problem, and it should be fixed before the next exam window, not after it.

A question worth sitting with: when did your invigilator training last include a photo line-up of the actual devices candidates are bringing into the room?

AI for efficiency: the regulator's quiet green light

The most underreported message of the day, in my view, was also the most positive. Where AI supports internal efficiency for AOs, whether that's generating items, producing synthetic responses for examiner training, or creating stimulus materials, Ofqual is genuinely supportive.

"Where it supports efficiency, consistency and speed of delivery for you in your internal processes, we are absolutely supportive."
— Lucy Sydney, Ofqual

The hard line is narrow. AI cannot be the sole marker of centre-marked assessment. Beyond that, the runway is wide. For any AO that has been waiting for permission to start automating internally, this was it. The AOs that move first will set the operating-cost benchmark the rest of the sector is measured against, and that gap is going to widen quickly.

A question worth sitting with: if Ofqual has signalled support, what is actually stopping your AO from automating the parts of your operation that don't need a human in the loop?

Why days like this matter

I want to be clear about what I came away with. Yes, the challenges facing our sector are more complex than they were even two or three years ago. Cyber attacks are no longer hypothetical for awarding organisations or for the education systems we sit alongside. Impersonation has gone professional. Smart-tech cheating is hiding in everyday objects. Internal cultures will quietly determine which AOs adapt and which don't. None of that is comfortable reading.

But I left the Forum genuinely encouraged. The room was full of people willing to name those challenges, share what they're trying, and admit what they don't yet know. That is the opposite of the impression management Sam warned us about, and it is exactly the culture our sector needs. No single AO is going to solve cyber, culture, fraud, smart-tech cheating and AI adoption on its own, and the brilliant thing is that we don't have to. The real value of the AO Forum, and of the wider conversations we have through the Test Community Network, is the room itself. The candid exchanges over coffee. The shared near misses. The quiet recognition that the AO sat next to you is wrestling with the same five questions you are. If you weren't there this time, please come along to the next one. The challenges are bigger than any one organisation, but the answers are well within our collective reach.