Research Sprint Webinar: What are the impacts of using AI to author question items?
Assessment professionals face an urgent need to understand not just whether AI can write test items, but what happens when it does: to quality, validity, security, fairness, ownership, and the very nature of assessment practice itself.
AI tools are already authoring test items in classrooms and examination halls worldwide. But do we truly understand what happens when algorithms replace, or augment, human item writers?
What are the impacts of using AI to author question items?
This isn't an abstract, future-focused question. Organisations are making decisions today about AI adoption that will shape assessment practices for years to come. Students are encountering AI-authored items in high-stakes exams. Legal precedents are being set. Security vulnerabilities are emerging.
What You'll Learn
Join four leading experts as they share insights from the frontlines of AI-powered assessment authoring:
Legal realities: Who owns AI-generated items? What are the liability implications?
Quality assurance: How do review processes change when humans aren't the original authors?
Security risks: What new vulnerabilities does AI authoring introduce?
Research evidence: What do studies reveal about bias, validity, and cultural fairness?
Practical guidance: What should assessment professionals be doing right now?
This isn't just another webinar; it's the launch of a collaborative Research Sprint in which the assessment community will work together to produce practical guidance on the impacts of AI authoring.
Why This Matters to You
If you're considering AI authoring tools:
Get evidence-based insights to inform your procurement and implementation decisions.
If you're already using AI:
Learn about risks you may not have considered and quality assurance approaches from peers.
If you're concerned about AI in assessment:
Understand the full landscape of impacts, both opportunities and challenges.
If you're researching this space:
Connect with practitioners facing real-world implementation questions and contribute to community knowledge.
Meet Your Expert Panel
Miia Nummela-Price
Legal Expert
Miia brings deep expertise at the intersection of technology, education, and law. As a multi-specialist legal leader focusing on complex tech and IT contracts, intellectual property rights, data protection and governance, and emerging technology compliance, she counsels organisations on the legal implications of AI adoption. Her work focuses on helping organisations navigate the complex questions of ownership, copyright, and liability that arise when AI systems generate test content.
What Miia will address:
Copyright status of AI-generated assessment items
Ownership implications: organisation vs. AI provider
Liability considerations when AI-authored items prove flawed
Contractual issues and vendor agreements
Current regulatory landscape and emerging legal frameworks
Neil has spent decades refining the craft of assessment item writing and leading item development teams. His expertise spans traditional authoring workflows, subject matter expert (SME) collaboration, and quality assurance processes. As AI tools have entered the assessment space, Neil has been at the forefront of understanding how these technologies reshape the roles of item writers, reviewers, and SMEs, and what new skills assessment professionals need to develop.
What Neil will address:
How AI is changing item authoring workflows in practice
The evolving role of subject matter experts
Quality assurance challenges when reviewing AI-generated content
Challenges in developing technology solutions for AI-assisted authoring
Sergio's research focuses on test security, item exposure, and the protection of assessment integrity. He has published on pre-exposure risks, cheating detection, and emerging threats to secure testing. With AI-generated content entering the assessment ecosystem, Sergio has been investigating new vulnerability vectors, from training data exposure to algorithmic predictability, that security-conscious organisations must consider.
What Sergio will address:
Pre-exposure risks specific to AI-authored items
Security vulnerabilities in AI authoring workflows
Detection challenges for compromised AI-generated content
Best practices for securing AI authoring processes
Karl researches practical AI implementation in education, focusing on quality assurance and ethical frameworks. His work examines how organisations can responsibly deploy AI systems while maintaining quality standards and human oversight throughout the development process.
What Karl will address:
Applied ethical frameworks for AI assessment development across the entire lifecycle
AI tool selection criteria: what to evaluate and test before implementation
Human-in-the-loop review: where human expertise remains essential and non-negotiable
Validation and quality control: construct validity, difficulty calibration, and managing non-deterministic AI behaviour
Feedback-driven improvement: using systematic feedback loops to close gaps between AI capability and quality requirements
Who Should Attend
Assessment directors and managers considering AI authoring tools
Item writers and content developers navigating workflow changes
Psychometricians concerned with validity and fairness
Test security professionals identifying new risks
Legal and compliance teams managing AI contracts
Researchers studying AI in educational measurement
Academic faculty exploring assessment innovation
EdTech leaders developing or evaluating AI solutions
The Test Community Network brings together assessment professionals through expert-led events, insightful discussions, and connections with the latest technologies and services. Join our community to stay ahead of the curve and shape the future of assessments.