Test Community Network

Using AI to Create Assessment Content

This collection brings together a diverse range of research papers, studies, and reports exploring the rapidly evolving landscape of artificial intelligence in assessment authoring and item generation. As educational institutions and awarding organisations worldwide grapple with the opportunities and challenges presented by AI technologies, these resources provide valuable insights into both the practical applications and critical considerations of using AI tools to create examination questions, multiple-choice items, and other assessment materials.

The collection encompasses empirical studies comparing AI-generated content with human-authored assessments across various educational contexts, from New Zealand's national moderation system to medical education and higher education settings. It examines the performance of different AI models, including various iterations of ChatGPT and other large language models, whilst also addressing crucial concerns around quality assurance, item security, pre-exposure risks, and the essential role of human oversight. Research on human-in-the-loop frameworks, ethical considerations, and regulatory responses provides a balanced perspective on how AI can augment, rather than replace, human expertise in assessment design.

Additionally, the collection features content on emerging policy developments and practical guidance for organisations considering AI implementation, covering deployment options, regulatory compliance, and the transformation of quality assurance processes. These resources reflect both the potential efficiency gains and the significant challenges that must be navigated as the sector adapts to these technological advances.

Disclaimer: The Test Community Network does not endorse the views, findings, or recommendations expressed in the linked content. These resources are provided for informational purposes to support professional knowledge and informed discussion within the assessment community.

Collection Items (32)

AI Adoption Playbook for UK Awarding Organisations

The "AI Adoption Playbook for UK Awarding Organisations" offers invaluable guidance for responsibly integrating AI into educational assessments. Featuring 37 practical strategies and expert advice, it...

📰 Content Content

Automated Question Paper Generator Using LLM

Research paper presenting an AI-powered system that automates question paper generation for educational institutions. The system uses Gemini-1.5-Pro and rule-based algorithms to create balanced, sylla...

📰 Content Research

Advancing Education: Evolving Assessments with AI

UNESCO MGIEP explores how artificial intelligence can transform educational assessment practices, addressing challenges in measuring complex cognitive skills needed for solving wicked problems like cl...

📰 Content Content

AI questions to be trialled in SATs moderator tests

The Standards and Testing Agency is piloting AI-generated questions in SATs moderator standardisation tests to reduce costs and school workload. The trial explores whether large language models can cr...

📰 Content News

Jump-Starting Item Parameters for Adaptive Language Tests

Research paper presenting a multi-task generalized linear model with BERT features to estimate test item difficulties for adaptive language assessments. The method rapidly improves difficulty estimate...

📰 Content Research

Automated Item Generation with Recurrent Neural Networks

Research paper exploring deep learning approaches for automated test item generation using recurrent neural networks, presenting an alternative to traditional human-written assessment items by impleme...

📰 Content Research