Generative Artificial Intelligence: Schools

Department: Department for Education

Tuesday 8th July 2025

Westminster Hall

Damian Hinds (East Hampshire) (Con)

I beg to move,

That this House has considered the use of generative artificial intelligence in schools.

It is a great pleasure to serve with you in the Chair, Sir Jeremy. You will not be presiding over a heated political debate this afternoon, and I hope this is a good opportunity to openly discuss the enormous change that is upon us. Throughout the education world, a lot of thinking is being done about artificial intelligence and its implications. Most of that thinking is being done by teachers, and I hope we will contribute to some of the wider system and policy questions today.

In a November 2024 omnibus survey, half of teachers said they had already used generative AI in their role; about a quarter said they had no plans to; and the rest either planned to or did not yet know. The most common uses of generative AI are creating resources, planning lessons and communicating with parents.

In the same survey, 6% of teachers said that pupils were permitted to use AI in the work they are set. It is hard to pinpoint an exact number, but it is fairly safe to say that the proportion of pupils actually using AI to some degree in their work is rather more than 6%. The survey data that we have, incomplete as it is, suggests that somewhere between one in three and three in four children are using AI to some degree, with some frequency, in their homework.

In this rapidly changing world, I commend the Department for Education for its guidance update in January and the materials that came out on 10 June, made by the Chiltern Learning Trust and the Chartered College of Teaching. Those materials are easily accessible, but that does not mean that masses of people have seen them. A great deal of communicating still needs to be done in this area.

The DFE guidance talks about balancing the safety considerations of AI with the opportunities. Those are definitely two considerations, but they are not the only ones, nor are they the most obvious. We always have to remember that whatever pace we work at in this place, or Whitehall works at, kids will work at about six times that pace.

The four areas I will briefly cover today are two areas of opportunity and two areas of risk, where we need some caution. The areas of opportunity are workload and enhancing learning, particularly for children with special educational needs. The areas of risk are the need for discerning and careful use of generative AI by pupils and, finally, the impact on homework, assessment and exams.

Workload is a big issue for teachers. Historically, along with pupil behaviour, it has often been the No. 1 issue in teacher retention. It was encouraging to see in a 2019 survey that reported workload had fallen by about five hours a week, or an hour a day. However, workload remains a stubborn issue, and the biggest contributors have been planning and preparation, marking, and data entry and analysis. There is also communicating with parents, dealing with behaviour issues and so on, but those first three all lend themselves to artificial intelligence.

In particular, the creation of teaching materials seems to be an enormous area of opportunity. Too many teachers spend their Sunday evenings at home, trawling the internet for resources to use during the working week, when much of that work could be done for them. I commend Oak National Academy for its work on the AI learning assistant.

As the Education Committee, chaired by the hon. Member for Dulwich and West Norwood (Helen Hayes), discussed just this morning, the national curriculum is, of course, a framework. It is not a precise set of things that children will learn, because we need diversity in provision. We therefore need to think about how AI can support that diversity. I hope the Minister will give us an update on the content store announcement of August 2024.

AI has plenty of other potential uses, such as for timetabling, for letters and emails home—although my special request would be that we do not add to the volume of communications that have to be consumed—and for report writing. But we need a clear code of practice: to retain the trust of parents, and indeed of pupils, there must be clarity about when and in what ways generative AI has been used. Care will still be needed. How does a teacher tell children that they must write their own work, if that teacher is generating text through a machine?

The second area of opportunity is in supporting learning. There is clearly a lot of potential for learning personalisation, especially for certain types of special educational need, alongside the role of assistive and adaptive technology. For some subjects, AI will also be useful for dynamic assessment. But schools will have the same issue with AI as with all educational technology, which is that they generally do not have much idea which bits of it are any good. There are both great products and pretty low-grade products, and discerning one from the other is notoriously difficult. As with edtech in general, but even more so with AI, product development cycles do not lend themselves to randomised controlled trials to try to establish what is good. I suggest that schools require an extension of the principle established with the LendED product, which is made with the British Educational Suppliers Association: a sort of forum where teachers can see these products, distil the good and bad, and crucially get recommendations from other schools and teachers, because teachers tend to trust teachers.

Afzal Khan (Manchester Rusholme) (Lab)

With the input of biased data, large language models are liable to produce inappropriate, biased or factually incorrect outputs. Does the right hon. Member agree that if generative AI is to be rolled out to schools, actions such as introducing statutory regulations must be taken to limit any bias or misinformation taught to children and young people?

Damian Hinds

Funnily enough, I agree with the hon. Member, though not necessarily about statutory requirements. It is certainly true—in fact, he inadvertently leads me on to my next point—that we need to be careful and discerning in using these products. There are many risks, including the safeguarding risks inherent in technology, hallucinations, dud information and, as the hon. Member rightly says, biases.

There are some very direct and sharp risks to children. I am afraid that many misleading, unpleasant and cruel things can be done with AI. They can be done already but, as with so many other things, AI magnifies and turbocharges the problem. Some of those things can be done by adults to children; some of them are done by children to other children. We need to be very aware of those risks, some of which relate to existing practices and policy questions, such as how to deal with intimate image abuse and sexting. The problem further supports the case for a comprehensive school-day phone ban, to take cameras out of schools.

More generally, there is a need for media literacy and general discernment. I am reluctant and nervous to talk about media literacy, and more so about the phrase “critical thinking,” because it is too often conflated with the false dichotomy that occasionally comes up in the educational world: knowledge versus skills. Clearly, we need both. We need both in life, and we need to have developed both in school, but knowledge precedes skills because we can only think with what we know. However, it is really important in this context that children know how AI can make mistakes, and that they come to trust and know to look out for the correct primary sources, and trusted brands—trusted sources—rather than just stuff on the internet.

In the 2019 guidance on teaching online safety in schools, since updated, a fusion of the computing, relationships and citizenship curricula was envisaged. Children would be guided through how to evaluate what they see online, how to recognise techniques used for persuasion, and how to understand confirmation bias, as well as misinformation and disinformation. The new edition of “Keeping Children Safe in Education”, which came out yesterday, lists disinformation and misinformation as safeguarding concerns in their own right for the first time. The online safety guidance also included the importance of learning why people might try to bend the truth on the internet and pretend to be someone they are not. That was a start, but at this technological inflection point, it needs a huge scaling up.

Steve Yemm Portrait Steve Yemm (Mansfield) (Lab)
- Hansard - - - Excerpts

Does the right hon. Gentleman share my concern about some of the dangers of using generative AI in the classroom, particularly around harmful content and activity? I read the National Society for the Prevention of Cruelty to Children’s “Viewing Generative AI and children’s safety in the round”, which gave examples of children creating deepfakes of other children in the class.

Does the right hon. Gentleman also share my concerns about children’s privacy and data protection, and the extent to which many of these edtech applications are created with the aim of minimising data protection? I understand he has concerns about regulation, but this seems to be almost entirely unregulated in the classroom. There is certainly a case for, at the very least, regulating data protection, data to third parties and—

--- Later in debate ---
Sir Jeremy Wright (in the Chair)

Order. That was either several interventions or a speech, neither of which is permissible. I urge all participants to keep interventions brief.

Damian Hinds

The hon. Member for Mansfield (Steve Yemm) should not misunderstand me, as I am not against regulation. His points about data protection and privacy are really important, although they are probably too big to fold entirely into this debate. His first group of points and what the NSPCC talks about are the same risks that I am talking about.

There is an even broader point, as there is already a lot of blurring between fact, fiction and opinion online. There are all manner of news sources and influencers, network gaming, virtual reality and augmented reality, the metaverse—the whole concept of reality is a little hazier than it once was. With these machines, which in some cases almost seem to have a personality of their own, there is a danger of yet more blurring.

We all shout at our PCs sometimes. Indeed, adults using AI may start to anthropomorphise the machine they are interacting with, giving it human form; I occasionally try to be polite when I interact with one of these interfaces. Apps such as character.ai take that to another level.

We have to think about the impact on children in their most formative years—on their sense of self, their understanding of the world and their mental wellbeing. That includes the very youngest children, who will be growing up in a world of the internet of things and connected toys. It will be that much more important to draw a line between what is real, what is human, and what is not. In time, when the system has had enough time to think about it—we are not nearly there yet—that may be yet another area for regulation.

Finally, I come to the most immediate risks, around homework, assessments and exams. Colleagues may already have had a conversation in which a teacher has said, “Isn’t it brilliant how much so-and-so has improved? Oh, hang on—have they?” They now cannot be absolutely certain. There are AI detectors, but they are not perfect. They can produce false positives. In other words, they can accuse people of plagiarising using AI when they are not. In any event, there is an arms race between the AI machine and the AI detector machine, which is deeply unsatisfactory. Of course, that is where the teacher’s skill comes in, because there is always classwork to compare. Most importantly, there is always the exam itself, and we need to keep it that way.

The safest way to protect the integrity of exams is for them to be handwritten in exam conditions, with a teacher walking up and down between the desks—not quite for everybody, but for the vast majority of children, except where a special educational need or disability requires another arrangement. There are also subjects, such as art, design and technology, and computer science, where it would not be appropriate.

There is already a big increase in access arrangements for exams. A particular type of adjustment, called a centre-delegated arrangement, does not need approval from the exam board, so no data on it is available. One such centre-delegated arrangement is to allow the child to use a keyboard—in the rubric it is called a word processor, which is a delightfully archaic term.

If children are allowed to use a keyboard, spellcheck and AutoText are disabled, to ensure safeguards are in place—but it is still true that most people can type faster than they can write, so there is a disparity between the two formats. The regulations require a school’s special educational needs co-ordinator to decide whether a child is able to use that facility, but they are still quite loose in that they refer to the keyboard being the child’s

“normal way of working at school”.

I would love the Minister to say a word about that. The Department for Education should be clear that, where such arrangements are made, it should be because of a special educational need or disability.

Afzal Khan

One concern I am beginning to feel is that, while technological development is important, an over-reliance on generative AI runs the risk of limiting open-mindedness, independent thinking, literacy and creative skills. Does the right hon. Member agree that we must protect key critical thinking and reasoning skills in children and young people, for their future and ours?

Damian Hinds

The hon. Gentleman makes his point lucidly and well, and I think it stands on its own feet.

The bigger issue with more children taking exams on a keyboard rather than on paper is that exam boards would like to move entire exams online for all children. In a sense, that would be better because it would be equal; there would not be any difference in the speed of writing and typing.

Some might ask what is wrong with that, as long as it is the same for everybody, and as long as the internet, spellcheck and autocorrect are disabled. I suggest there would still be multiple types of security risk in having exams done en masse online. There is also a wider problem: if a GCSE is done online, how will students do a mock GCSE? They will do it online. How will someone do a year 9 exam? Hon. Members can see where I am going with this. It cascades further and further down the age range, until eventually people question why they are learning to write with a pen at all. Some are already asking that question.

By the time my child is an adult, people will not even be using a keyboard, but other types of communication and interface, and this will seem very archaic. There are important advantages to learning to write by hand, however. Handwriting and writing are not the same thing. The way someone develops their handwriting, learning the strokes and patterns and how letters join together, is an important part of learning the skill of wider writing. There is also plenty of evidence that making marks on a page by hand aids visual memory. Handwriting helps us to understand things because, as we write, we synthesise what we are reading or hearing into our own words. There is even evidence to suggest that people do better in tests and have better recall as a result. Maintaining handwriting is therefore important in its own right, quite apart from maintaining the security and integrity of examinations.

DFE guidance states that teachers should keep up to date with this rapidly changing world. That is a tough ask. Over the months and years ahead, the Department will have to do a lot to provide teachers and school leaders with bite-sized, easily digestible chunks of information to keep them up to date with this rapidly changing area. A recent Ofsted report, on 27 June, said that there was not yet enough evidence to conclude what constitutes a good use of AI in schools, but that one common approach among schools that seemed to be using AI successfully was to have a champion who spreads good practice throughout the school. That seems to me a good approach.

Sarah Hannafin of the National Association of Head Teachers stated:

“The technology should be introduced gradually…to maximise its potential and mitigate the risks.”

That is an important point. Most immediately, I implore the Minister not to allow all exams to go digital en masse, except for certain subjects where that makes sense, and except, of course, for an individual child for whom that is the right thing because of their special educational need or disability.

I contend that there should be no rush to move to online exams. There might be opportunities, lower costs or easier administration involved, but there are certainly also risks, some of which are immediate and some of which would manifest only over time and might take us a long time to spot. If we do move en masse to online exams and away from pen on paper, I promise hon. Members that we would never go back. A cautious approach is what is required.

--- Later in debate ---
Stephen Morgan

I will certainly take that back. I have had discussions with colleagues at the Department for Science, Innovation and Technology and others about reliability, safety and biases.

In November last year, with the Under-Secretary of State for Science, Innovation and Technology, my hon. Friend the Member for Enfield North (Feryal Clark), I met leading global tech firms, including Google, Microsoft and Adobe, to agree safety expectations and to ensure that AI tools are safe for classroom use. We are also supporting staff to use AI safely. In partnership with the Chiltern Learning Trust and the Chartered College of Teaching, we have published online support materials to help teachers and leaders to use AI safely and effectively, developed by the sector, for the sector. They supplement the Department’s AI policy paper—which we updated in June—alongside the information for educators about using AI safely and effectively, and the toolkit for leaders to help address the risks and opportunities of AI across their whole setting.

To develop our evidence base, we have launched two pilot programmes, the edtech evidence board and the edtech testbed. The first is to ensure that schools have the confidence to secure edtech products that work well for their setting, and the second is to evaluate the impact of edtech and AI products on improving staff workload, pupil outcomes and inclusivity. I want to assure all hon. Members that we will continue to work with schools to support them in harnessing opportunities and managing potential challenges presented by generative AI.

A number of hon. Members, including the Liberal Democrat spokesperson, the hon. Member for Guildford (Zöe Franklin), spoke about social media. “Keeping children safe in education” is statutory guidance that provides schools and colleges with robust information on how to protect pupils and students online. The guidance has been significantly strengthened with regard to online safety, which is now embedded throughout, making clear the importance of taking a whole-school approach to keeping children safe online. The DFE is working across Government to implement the Online Safety Act 2023 and to address technology-related risks, including AI in education. I can assure the hon. Member for Guildford that it is a priority for us to ensure that children benefit from its protections.

On the point that a number of hon. Members made about the impact on qualifications, assessment and regulation, the majority of GCSE and A-level assessments are exams taken under close staff supervision, with no access to the internet. Schools, colleges and awarding organisations are continuing to take reasonable steps to prevent malpractice involving the use of generative AI in formal assessments. Ofqual is, of course, the independent regulator of qualifications and assessments, and published its approach to regulating AI use in the qualifications sector in 2024. Ofqual supported the production of guidance from the Joint Council for Qualifications on the use of AI in assessments. That guidance provides teachers and exam centres with information to help them to prevent and identify potential malpractice involving the misuse of AI.

More broadly, the curriculum and assessment review’s interim report acknowledged risks concerning AI use in coursework assessments. The review is taking a subject-by-subject approach to consider assessment fitness for purpose and the impact of different assessment methods on teaching and learning. I assure Members that the review is considering potential risks, the trade-offs with non-exam assessment such as deliverability, and the risks of malpractice and to equity.

Damian Hinds

There are two simple safeguards against misuse of AI in exams here in front of me. Will the Minister recognise that the best way to ensure the security and integrity of exams, and how assessment is done lower down the school, is—for the great majority of children, in the majority of subjects—for exams to be handwritten in exam conditions?

Sir Jeremy Wright (in the Chair)

For the assistance of Hansard, I point out that the right hon. Gentleman was holding up a pen and paper.

--- Later in debate ---
Damian Hinds

I was happy not to wind up, but you have now made me stand up, Sir Jeremy. We have had a good and constructive debate. I am grateful to the Minister for his engagement, and to all colleagues for taking part.

Dr Al Pinkerton (Surrey Heath) (LD)

Please accept my apologies for my late attendance in the Chamber. I was at the statement in the main Chamber on the Horizon scandal, which is perhaps another example of overreliance on technology when the human eye was identifying issues that people could see. My experience comes mostly from the higher education sector, where colleagues I have spoken to report a far greater incidence of the use of AI. It is now so clever that it generates false sources to back up incorrect claims, with incredibly plausible use of academic names to make seemingly profound points. I wonder whether we now face a reality in which AI might be used not only for marking, but for the marking of AI-generated material.

Damian Hinds

Indeed—computers talking to computers, with us as the facilitators. The hon. Gentleman makes a good point.

I will conclude by repeating something I said much earlier in my remarks. We should always remember that, at whatever pace we, the education system or, certainly, Government can work, young people will work at a pace six times faster. I am, again, grateful to the Minister.

Question put and agreed to.

That this House has considered the use of generative artificial intelligence in schools.