AI Safety Debate
Martin Wrigley (Liberal Democrat, Newton Abbot), in a debate with the Department for Science, Innovation & Technology
Westminster Hall
Westminster Hall is an alternative Chamber for MPs to hold debates, named after the adjoining Westminster Hall.
Each debate is chaired by an MP from the Panel of Chairs, rather than the Speaker or Deputy Speaker. A Government Minister will give the final speech, and no votes may be called on the debate topic.
This information is provided by Parallel Parliament and does not comprise part of the official record
Martin Wrigley (Newton Abbot) (LD)
It is a pleasure to serve under your chairship, Ms Butler. We have had some interesting contributions so far. I fully agree that we need to look at regulation, but I question whether we can regulate a technology. Today, every search we do is already powered by AI. To regulate a technology is a bit like trying to regulate a wheel rather than regulating the car. We need to look at how it is used and how it is then delivered.
We have talked about many different types of AI, and we must be clear that today’s artificial intelligence with pretrained generative output is different from the potential future of general AI, which is something else again and a whole new question. Today’s technology takes a question and gives us an answer.
Most of the harms that we have heard about in the debate are not new—they can already happen using other means—but AI makes them faster and easier to deliver, so what could have been done in PaintShop five years ago can now be done with AI in moments, and without the same levels of skill. There are not new harms; there are just new ways of delivering them. We need to look carefully at regulation and not focus too specifically on AI as a technology, but think about the outcomes and how people are using it. That is what needs to be regulated.
AI is very good at pattern recognition. Essentially what we see today in ChatGPT and others is the same technology that I was taught at university many years ago; it is just that now we have the compute power to run those neural networks that can recognise patterns. They are trained: feed them 5,000 pictures of a cat and they can identify a picture of a cat. It is slightly more subtle and advanced now, because we have added good natural language processing and large language models—that is, big data. We are feeding them much more data so that they can recognise more things. There is huge value and opportunity in that, which we must be careful not to regulate into insignificance.
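The training the speaker describes can be sketched in miniature: a single artificial neuron adjusts its weights on labelled examples until it separates two classes. The "cat" features and labels below are invented purely for illustration; a real image classifier uses millions of parameters, but the principle is the same.

```python
# Toy sketch of training by pattern recognition: a single perceptron
# (one artificial neuron) nudges its weights towards each labelled
# example until its predictions match the training data.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (feature_vector, label) pairs, label 0 or 1."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction  # 0 when the neuron is already right
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Invented toy features: (has_whiskers, climbs_trees); 1 = "cat", 0 = "not cat".
data = [((1, 1), 1), ((1, 0), 1), ((0, 1), 0), ((0, 0), 0)]
w, b = train_perceptron(data)
```

After a few passes over the examples, the learned weights classify all four toy inputs correctly; scaling the same idea to thousands of images is what the compute power the speaker mentions makes possible.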
I will say one other thing. There is a fundamental problem with AI: it is non-deterministic. Because it is recognising patterns, we cannot predict what it will do. Therefore, our current testing methodology of a known set of data, a known process and an expected set of outcomes cannot be relied on, because AI will give an answer that could be this, that or t’other. We must think about how we use it in processes and how we expect the output to be regular, because it will not be. We do not get the same answer twice—but that has been true of Google for a long time. If any hon. Member asked the same question as me on Google, they would get different answers.
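The non-determinism described above can be sketched as follows: a generative model produces a probability distribution over possible next words, and the answer is sampled from it, so the same prompt can yield different text on different runs. The vocabulary and probabilities here are invented for illustration.

```python
import random

# A generative model does not return one fixed answer; it returns a
# probability distribution and the output is *sampled* from it.

def sample_next_word(distribution, rng):
    """Draw one word from a {word: probability} mapping."""
    words = list(distribution)
    weights = list(distribution.values())
    return rng.choices(words, weights=weights, k=1)[0]

# Hypothetical distribution a model might assign after "The cat sat on the".
next_word_probs = {"mat": 0.6, "sofa": 0.25, "roof": 0.1, "moon": 0.05}

# Same input, two runs: the outputs may legitimately differ.
run1 = sample_next_word(next_word_probs, random.Random())
run2 = sample_next_word(next_word_probs, random.Random())

# Testing therefore needs a different shape: either fix the random seed
# to make a run reproducible, or assert properties of the output
# (e.g. it is a valid word) rather than one exact expected string.
seeded = sample_next_word(next_word_probs, random.Random(42))
```

This is why a conventional test of "known input, one expected output" breaks down for these systems: a meaningful test must either pin the randomness or check that the answer falls within an acceptable set.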