
The Department for Education published its Safe and Effective Use of AI in Education: Leadership Toolkit this week to guide educators in thoughtfully integrating artificial intelligence (AI) into teaching and learning.
This report highlights the transformative potential of AI to personalise instruction, reduce workloads, and foster inclusive classrooms. However, it also underscores the importance of addressing the risks and unintended consequences that accompany the use of AI in educating young people.
AI’s role in personalised learning
A central theme of the toolkit is AI’s ability to personalise learning, tailoring educational materials to individual student needs:
- AI can simplify complex texts for students with lower reading levels, ensuring they remain engaged in classroom discussions without feeling excluded.
- Students learning English as an additional language (EAL) can access translations or customised resources that help them better understand lessons.
Such applications not only enhance learning but also ensure students feel valued and included, regardless of their abilities or background.
For students with special educational needs and disabilities (SEND), AI tools can be particularly transformative:
- Assistive technologies such as speech-to-text tools can help students who struggle with writing, while AI-powered systems can describe scenes for visually impaired learners.
- These tools empower students to participate more fully in educational activities, fostering independence and inclusion.
- AI’s multilingual capabilities also support EAL students and their families, helping to bridge communication gaps and create a more inclusive learning environment.
Supporting workload reduction for teachers
The toolkit highlights how AI can alleviate the heavy workloads faced by teachers. AI tools can generate:
- Lesson plans
- Quizzes
- Differentiated worksheets tailored to varying student abilities
For example, an AI tool might create multiple-choice questions aligned with the national curriculum, freeing up time for teachers to focus on direct student engagement and other high-impact activities.
AI also supports feedback and assessment processes. Systems can analyse student work and provide detailed feedback, such as identifying recurring grammatical errors or suggesting ways to strengthen an argument. These tools are particularly valuable in large classrooms, where providing individualised feedback can be time-consuming. However, the toolkit stresses that teachers must always review AI-generated feedback to ensure it is accurate, relevant, and aligned with learning objectives.
Assessment expert Daisy Christodoulou has highlighted that AI tools can provide instant feedback on student work, enabling teachers to identify gaps in knowledge and adjust instruction accordingly.
- For example, an AI system might analyse student responses to a maths problem and flag misconceptions, allowing the teacher to intervene quickly and effectively.
- Christodoulou notes that this ability to offer timely, targeted insights can transform the way educators approach assessment, making it more dynamic and responsive to student needs.
Leadership knowledge gaps and strategic AI use
One of the most critical points raised in the toolkit is the lack of theoretical and pedagogical knowledge among many leaders tasked with making decisions about AI adoption. There is a danger that such decisions could be driven by external pressure to adopt new technologies rather than by a deep understanding of how these tools align with educational goals.
As Rose Luckin, emeritus professor of learner-centred design at University College London (UCL), cautions in the report:
“I always say to educators, learn fast, but act more slowly. Do not feel pressurised to buy into an AI. Do not feel pressurised to select an AI tool until you are ready. It’s really important that you learn enough about artificial intelligence first so that you can decide what purpose you want the AI to serve you and then design the way that you interact with AI strategically.”
This advice highlights the importance of learning and preparation before implementation. Without a clear understanding of pedagogy and AI’s capabilities, leaders risk introducing tools that fail to address educational challenges or unintentionally inhibit cognitive development.
Risks and unintended consequences
While AI offers significant benefits, the toolkit identifies several risks that educators must address:
- Over-reliance on AI
Students may bypass critical learning processes, such as writing or problem-solving, by relying on AI tools to complete tasks. For example, a student might use an AI chatbot to generate an essay, missing the opportunity to develop their own skills.
- Bias in AI systems
AI tools are trained on large datasets, which may contain inherent biases. This can lead to outputs that reinforce stereotypes or exclude diverse perspectives. For instance, an AI-generated history lesson might present a narrow, Eurocentric view of events.
- Inaccuracy in outputs (hallucinations)
AI systems can generate plausible-sounding but entirely incorrect responses. Examples include fabricated historical facts or misinterpreted scientific concepts. Educators must carefully verify AI outputs to prevent the spread of misinformation. Highly educated professionals have already fallen into this trap. On 6 June 2025, Dame Victoria Sharp, president of the King's Bench Division of the High Court, warned against the misuse of AI in legal work following cases involving fabricated case-law citations. She highlighted the dangers of plausible but false AI outputs and urged lawyers to verify their work, stressing that ethical responsibilities must be upheld to maintain trust in the justice system.
- Ethical and data privacy concerns
Using student work in AI systems without consent could violate intellectual property rights and expose sensitive data to misuse. Compliance with data protection laws, such as the UK GDPR, is essential to safeguard student work. Likewise, scanning copyrighted resources such as textbooks into AI tools raises copyright concerns.
- Impact on human interaction
Teaching is fundamentally relational, built on empathy and the ability to respond to students’ unique needs. Over-reliance on AI risks diminishing these critical aspects. For example, while automated feedback systems can save time, they lack the personal touch of teacher comments, which can motivate and inspire students in ways that AI cannot.
Best practices for AI implementation
To mitigate these risks and maximise AI’s potential, the toolkit offers several recommendations:
- Human oversight: Teachers and leaders must critically evaluate AI-generated content to ensure it aligns with educational objectives.
- Foster critical thinking: Students should be encouraged to evaluate AI-generated information critically. For example, teachers can design activities where students fact-check AI responses, fostering analytical skills.
- Enhance, don’t replace: AI should complement, not replace, traditional teaching methods. For instance, a teacher might use AI to generate a lesson outline but still design interactive activities to promote deeper learning.
- Professional development: Schools should provide ongoing training to help teachers use AI tools effectively and safely.
- Start small and scale thoughtfully: Begin with pilot projects in specific areas, such as automating feedback or creating personalised resources, before expanding AI use more broadly.
Conclusion
The Safe and Effective Use of AI in Education: Leadership Toolkit offers a comprehensive guide to integrating AI into teaching and learning.
While AI has the potential to revolutionise education by reducing workloads, personalising instruction, and supporting inclusion, its benefits can only be fully realised if implemented thoughtfully and ethically.