Generative Artificial Intelligence: Schools Debate
Helen Hayes (Labour - Dulwich and West Norwood)
Westminster Hall
It is a pleasure to see you in the Chair, Sir Jeremy. I congratulate the right hon. Member for East Hampshire (Damian Hinds) on securing this important debate.
The use of generative artificial intelligence in education is a critical challenge of our time. As parliamentarians, we bear the responsibility for ensuring that this new technology is harnessed to support teachers and examining bodies and to enhance learning, while safeguarding our children’s intellectual and emotional growth and not undermining the critical skills and values they need for the future.
Although generative AI presents some distinct challenges, it sits within a suite of technology-related issues that have all significantly changed the landscape in which our children are growing up and being educated, including the use of smartphones and other devices and engagement with social media. Every generation of parents and teachers has to support children and young people to navigate something that they did not have to contend with in their own childhood, and so it is for our generation. We must understand the power and potential of AI and other technologies, and we must understand in detail the risks and threats. We must also give our teachers and school leaders, and our children and young people, the tools they need to harness its potential with good ethical and critical thinking, while safeguarding their wellbeing.
Generative AI holds immense promise across a range of applications in our schools. There are important potential applications in the context of rising teacher workloads, which it is vital to address if we are to improve the recruitment and retention of teachers in our schools; but the use of AI for lesson planning, assessment and marking cannot be a substitute for subject experts who work in person with their students, providing tailored teaching to meet the needs of individuals in the classroom.
It is important that older pupils have a good understanding of the benefits, weaknesses and threats of emerging technologies such as generative AI. At its best, generative AI offers the potential to summarise lengthy and complex technical texts accurately, in a way that is easy for a layperson to understand, or to generate in moments computer code that might otherwise take an experienced computer scientist months to write. There are potential applications for children with special educational needs and disabilities, too.
However, the promise of AI comes with potential peril. Over-reliance on generative AI risks eroding children’s critical thinking and independent learning. The Parliamentary Office of Science and Technology warns that AI tools, if misused, can reduce students to passive recipients of unreliable, biased and potentially hallucinated pre-generated content, undermining the cognitive struggle essential for deep learning. Studies suggest that excessive dependence on AI for problem solving can weaken analytical skills, as students bypass the iterative process of reasoning and reflection. The ability to assess ideas critically for their origin and value could be fundamentally affected. That is particularly concerning for subjects requiring interpretive or creative thought, where AI’s efficiency may shortcut the development of original ideas. If children lean too heavily on AI, we risk nurturing a generation skilled at consuming information, but less adept at questioning, critiquing or innovating.
Beyond our schools and classrooms, generative AI has potential to assist exam boards in the accurate and fair assessment of public examinations and to become an invaluable tool in our universities and workplaces. However, alongside the potential benefits, we are already seeing the significant harms that AI can inflict through the generation of convincing deepfake images and their use in the appalling bullying and exploitation of some children and young people.
That concern is amplified within the broader context of screen time. Our predecessor Education Committee’s inquiry into screen time last year revealed a 52% surge in children’s screen use from 2020 to 2022, linked to declines in attention, sleep quality and mental wellbeing. Generative AI, which is often accessed via screens, must be integrated thoughtfully to avoid exacerbating those trends. Vulnerable children, including those facing socioeconomic hardship, neurodiversity or mental health challenges, are particularly at risk. The Parliamentary Office of Science and Technology briefing on AI and wellbeing notes that those students may benefit most from AI’s accessibility, but they are also most susceptible to its potential harms, such as reduced agency or exposure to inappropriate content.
We are already seeing the profound impact of AI in education, from schools rethinking their approach to homework to universities reverting to traditional in-person exams. Sam Illingworth of Edinburgh Napier University has argued that we need to think about how we can tailor the assessment of students and provide better and more creative support for their learning, and work to that end is ongoing in universities. These shifts may signal that we need a more fundamental re-evaluation of how we design learning and assessment in this new technological era.
What must be done? First and foremost, the Department for Education must provide clear and robust guidance on the ethical use of generative AI in schools. Our predecessor Committee rightly called for urgent legislation to regulate AI, warning that the pace of technological advancement risks outstripping our ability to legislate effectively, with potentially harmful consequences for children. It is imperative that AI developers are held accountable for how children’s data is used, particularly where those children are below the digital age of consent. Indeed, there are strong arguments, which I support, for increasing the digital age of consent from 13 to 16. Safeguards must be put in place to ensure transparency in AI-generated content, prevent over-reliance on automated tools and preserve essential skills such as critical thinking.
Secondly, my Committee has recently heard about the importance of prioritising digital literacy across the board. Teachers, students and parents need training to understand AI’s mechanics, biases and limitations. An informed educator can guide students to use AI as a tool for exploration, not a crutch for answers.
Finally, we must champion the irreplaceable value of human connection. No algorithm can replicate a teacher’s empathy, a student’s curiosity or the spark of collaborative discovery. AI must be used to enhance those relationships, not to supplant them.
The choices we make today will shape the minds of tomorrow. If we fail to balance AI’s potential with its risks, if we fail to regulate appropriately, if we fail to fully understand this technology and the opportunities and risks it presents, we may compromise the critical thinking skills that define an educated society and we may undermine the values that we seek to promote. Let us act decisively to harness generative AI as a servant of learning, not its master.