The AI Marking Spectrum: A Conversation with Excelsoft

Understanding when and how to apply different AI approaches across assessment scenarios

Published: 27 October 2025

The conversation around artificial intelligence in education assessment has become increasingly polarised. On one side, there's breathless enthusiasm about AI as the solution to marking challenges. On the other, there's deep concern about regulatory compliance and the integrity of high-stakes assessments. The reality, as with most emerging technologies, lies somewhere in between – and requires a more nuanced understanding of what we actually mean by "AI".

The AI Spectrum: It's Not Just About Generative AI

When most people think about AI in marking today, they immediately think of generative AI tools like ChatGPT being asked to mark examination scripts. This narrow focus misses the broader picture. The spectrum of AI technologies available spans from simple rules-based systems through to sophisticated large language models, and understanding this spectrum is crucial for awarding organisations exploring how AI might support their work.

At one end sits rules-based multiple-choice marking – which isn't really AI at all, but automated scoring against predetermined correct answers. Moving along the spectrum, we encounter supervised machine learning, where scoring engines are trained on previously marked scripts; deep neural networks, with their multi-layered analysis capabilities; and fine-tuned large language models adapted specifically for assessment contexts. Only at the far end do we find the generic generative AI tools that dominate current headlines.
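
To make the two ends nearest current practice concrete, here is a minimal sketch in Python: a rules-based lookup against an answer key, followed by a toy supervised scoring engine trained on previously marked responses. The answer key, the responses, the marks, and the scikit-learn pipeline are all hypothetical illustrations, not a production design.

```python
# Two points on the spectrum, side by side (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1. Rules-based multiple-choice scoring: not really AI, just a
#    lookup against a predetermined answer key (hypothetical paper).
ANSWER_KEY = {"Q1": "B", "Q2": "D", "Q3": "A"}

def score_mcq(responses: dict) -> int:
    """Award one mark per response that matches the key."""
    return sum(1 for q, ans in responses.items() if ANSWER_KEY.get(q) == ans)

print(score_mcq({"Q1": "B", "Q2": "C", "Q3": "A"}))  # -> 2

# 2. Supervised machine learning: a scoring engine trained on
#    previously human-marked free-text responses (toy data, toy model).
past_responses = [
    "evaporation followed by condensation",
    "the water just gets hot",
    "liquid turns to vapour then condenses back to liquid",
    "it boils",
]
past_marks = [1, 0, 1, 0]  # marks previously awarded by human examiners

engine = make_pipeline(TfidfVectorizer(), LogisticRegression())
engine.fit(past_responses, past_marks)
print(engine.predict(["vapour condenses back into water"]))  # suggested mark
```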

This spectrum matters because different technologies bring different capabilities, risks, and regulatory implications. More importantly, it opens up possibilities for AI adoption that don't require awarding organisations to leap straight to the most controversial applications.

AI as Augmentation, Not Replacement

The critical principle that should guide any exploration of AI in marking is this: AI should augment human expertise, not replace it. The human must remain firmly in the loop, with AI serving to nudge, support, and enhance the marking process rather than lead it.

This isn't just a regulatory requirement or a philosophical stance – it's a practical recognition of what AI can and cannot reliably do. AI systems, regardless of their sophistication, can hallucinate and lack the contextual understanding that experienced markers bring to assessment judgements. A marker's professional expertise, understanding of qualification standards, and ability to interpret nuanced responses remain irreplaceable.

However, this doesn't mean AI has no role to play. Consider the current reality of assessment marking: even in developed economies, approximately 95% of examinations still happen on paper. These handwritten scripts must be securely transported, scanned, digitised, and distributed to markers – a process involving significant time, cost, and logistical complexity. Markers themselves are human, subject to fatigue, mood variations, and the natural inconsistencies that come with processing hundreds of scripts. Despite best efforts with standardisation and moderation, marking accuracy varies across a marker's day and across different markers.

Practical Entry Points for AI Adoption

Rather than focusing solely on whether AI can mark entire scripts, awarding organisations should explore how AI can support the broader marking ecosystem. Several practical applications offer lower-risk opportunities for learning and experimentation:

Optical Character Recognition (OCR) and Script Processing: AI-powered OCR can digitise handwritten responses far more effectively than traditional scanning, making scripts more accessible to markers and enabling better workflow management. AI can also sort scripts, check that responses have been routed correctly, and flag potential issues before human marking begins.
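
As a sketch of what that first step might look like, the snippet below runs a scanned page through the open-source pytesseract wrapper and flags low-confidence pages for human attention. Note the assumptions: Tesseract is built for printed text, so real handwritten scripts would need a dedicated handwriting-recognition model, and the confidence threshold is an arbitrary placeholder.

```python
# Sketch of an OCR pass over one scanned page with pytesseract.
# Tesseract targets printed text; production handwriting would
# need a dedicated handwriting-recognition (HTR) model instead.
from pathlib import Path

import pytesseract
from PIL import Image

def digitise_script(image_path: Path) -> dict:
    """Extract text plus a crude confidence signal from one page."""
    image = Image.open(image_path)
    text = pytesseract.image_to_string(image)
    data = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)
    confidences = [float(c) for c in data["conf"] if float(c) >= 0]  # -1 = no text
    mean_conf = sum(confidences) / len(confidences) if confidences else 0.0
    return {
        "path": str(image_path),
        "text": text,
        "mean_confidence": mean_conf,
        "needs_review": mean_conf < 60,  # placeholder threshold, tune per paper
    }
```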

Marker Training and Standardisation: AI systems trained on exemplar responses can serve as training tools for new markers or as refreshers for experienced ones. By analysing marking patterns, AI can identify when a marker's scoring appears to be an outlier and prompt them to reconsider – not overriding their judgement, but nudging them towards greater consistency with agreed standards.
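
One simple way to implement that nudge, assuming all markers score a shared set of seed scripts, is to compare each marker's average against the group and flag statistical outliers, as in the sketch below. The z-score threshold is a hypothetical tuning parameter, and the output is a prompt to reconsider, never an automatic re-mark.

```python
# Illustrative consistency check over a shared seed item:
# flag markers whose average score drifts from the group.
from statistics import mean, stdev

def flag_outlier_markers(scores_by_marker: dict, z_threshold: float = 2.0) -> list:
    means = {m: mean(s) for m, s in scores_by_marker.items()}
    overall = mean(means.values())
    spread = stdev(means.values())
    if spread == 0:
        return []  # perfect agreement, nothing to flag
    return [m for m, mu in means.items()
            if abs(mu - overall) / spread > z_threshold]

scores = {
    "marker_a": [6, 7, 6, 7],
    "marker_b": [6, 6, 7, 6],
    "marker_c": [9, 10, 9, 10],  # noticeably more generous
}
print(flag_outlier_markers(scores, z_threshold=1.0))  # -> ['marker_c']
```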

Quality Assurance and Sampling: Where awarding organisations sample marking for quality assurance purposes, AI can work in parallel with human markers to identify scripts that warrant closer scrutiny. This doesn't replace human verification but makes the sampling process more intelligent and targeted, potentially identifying issues that might otherwise be missed.
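
A minimal version of that targeting, assuming an AI second-read already exists for each script, ranks scripts by how far the AI's suggested mark diverges from the human's and surfaces the largest disagreements first; the tolerance here is a placeholder to be set per qualification.

```python
# Illustrative targeted sampling: send QA effort where the AI
# second-read and the human mark disagree most.
def scripts_for_review(marks: list, tolerance: float = 2.0) -> list:
    """marks: (script_id, human_mark, ai_mark) triples."""
    flagged = [(sid, abs(h - a)) for sid, h, a in marks if abs(h - a) > tolerance]
    flagged.sort(key=lambda item: item[1], reverse=True)  # biggest gap first
    return [sid for sid, _ in flagged]

batch = [("S001", 14, 15), ("S002", 8, 13), ("S003", 19, 16)]
print(scripts_for_review(batch))  # -> ['S002', 'S003']
```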

Understanding the Trade-offs

Different AI approaches come with distinct advantages and limitations. Generic large language models are powerful but untrained in assessment-specific contexts and prone to hallucinations. Fine-tuned models adapted for assessment domains can perform better but require significant training data, expertise, and computational resources. More traditional machine learning approaches may be less flexible but offer greater interpretability and control.

The choice of technology depends on your specific use case, risk tolerance, and available resources. Critically, regardless of which approach you choose, robust human oversight and verification mechanisms must be built into the process from the start.

The Path Forward: Explore to Learn

The key message for awarding organisations is this: thinking "AI-forward" doesn't mean rushing to implement AI marking systems. It means actively exploring where AI might add value to your marking processes, understanding the technology landscape, and learning through controlled experimentation.

This might mean running AI analysis in parallel with human marking to understand correlation and identify anomalies. It might mean using AI for specific supporting tasks like OCR or quality flagging. It might mean investing time in understanding different AI technologies and their implications for your context.
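
For the first of those options, a parallel run can start as simply as the sketch below: correlate human and AI marks over a batch (the numbers are made up) and list the scripts where the two diverge sharply. It uses statistics.correlation, which requires Python 3.10 or later.

```python
# Illustrative parallel run: measure agreement between human and
# AI marks, and surface per-script anomalies for investigation.
# Disagreement prompts a closer look; it never overrides the human.
from statistics import correlation

human = [12, 15, 9, 18, 7, 14]
ai = [11, 16, 8, 17, 12, 13]  # hypothetical AI second-read

r = correlation(human, ai)
anomalies = [i for i, (h, a) in enumerate(zip(human, ai)) if abs(h - a) >= 4]
print(f"Pearson r = {r:.2f}, anomalous scripts at indices {anomalies}")
```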

What it shouldn't mean is standing still whilst the assessment landscape evolves around you. The technology will continue to develop, regulatory frameworks will adapt, and organisations that have invested in understanding AI's potential – and its limitations – will be better positioned to make informed decisions.

The conversation about AI in marking isn't binary. It's not "AI or human" but rather "how can AI support humans to mark more efficiently, consistently, and reliably?" That's a question worth exploring, with appropriate caution and purpose, rather than dismissing or delaying. The organisations that learn fastest will be those that start exploring now.

Your Presenters

Tim Burnett will act as your “translator,” framing complex AI concepts in accessible terms and ensuring the conversation addresses the questions assessment professionals are really asking.

Adarsh Sudhindra brings deep technical expertise as a technologist, innovator, and entrepreneur with extensive experience in the e-learning industry. A TEDx speaker, and a graduate of UIUC (Computer Science) and the Kellogg School of Business, Adarsh will share practical insights from Excelsoft’s AI marking solutions and their real-world implementations.

Vishwanath Subbanna is a seasoned technology and product leader with extensive experience in research initiatives, global product delivery, and EdTech innovation. As a Principal Consultant at Excelsoft, he plays a pivotal role in aligning customer requirements with product capabilities. Drawing on his background in solutioning, product innovation, and project management, Vishwanath helps organisations achieve future-ready, best-fit solutions in learning and assessment – and serves as a trusted advisor in the evolving EdTech landscape.


The Test Community Network brings together assessment professionals through expert-led events, insightful discussions, and connections with the latest technologies and services. Join our community to stay ahead of the curve and shape the future of assessments.