Monday 24th July 2023

Lords Chamber
Motion to Take Note
15:53
Moved by
Lord Ravensdale

That this House takes note of the ongoing development of advanced artificial intelligence, associated risks and potential approaches to regulation within the UK and internationally.

Lord Ravensdale (CB)

My Lords, I first declare my interest as a project director working for Atkins, and note that this is not just my debate: a number of noble Lords right across the Cross Benches put forward submissions on this topic, including the noble Baroness, Lady Kidron, the noble and right reverend Lord, Lord Harries, and the noble Lord, Lord Patel.

There are a couple of reasons why I was keen to put this forward. First, as we have seen recently, the rapid advancement of AI has been brought into sharp relief by the ongoing development of large language models, some of which are likely to be able to pass the famous Turing test, which has been something of a benchmark for machine intelligence over the past 50 years. Questions about the existential risk of that technology have also resurfaced. AI is the most important technology of our generation. It will cause significant upheaval right across the economy, good and bad. We in Parliament need to be thinking about this area and talking about it a lot more than we are. It should be right at the top of our agenda. Secondly, there is the matter of timing. The Government released their White Paper earlier this year but, in some respects, it has been overtaken by events, and the Government appear to be rethinking aspects of their strategy. Therefore, this is the perfect time for this House to express its views on the issues before us, to help inform forthcoming government responses.

In my work as an engineering consultant, I have worked on and seen the continued advancement of these technologies over the years. Several years ago, one of my projects involved the transfer of large amounts of engineering data—the complete design definition of a nuclear reactor, hundreds of thousands of documents—from an old to a new computer system. A proposal was developed to get eight graduate engineers sitting at desks and manually transferring the data, which would have been a terrible waste of talent. We sat down with our brightest young engineers, and they investigated and developed a smart algorithm which worked in the graphical user interface, just as a human would. It was effectively a software robot to undertake this work and replace human workers. This was able to crunch through the entire task in minimal time—a matter of months—saving hundreds of thousands of pounds and thousands of hours of engineering effort.

Across the industry, we are starting to see automation coupled with AI and machine learning, learning how to resolve discrepancies in data from past experience and continuing that process of freeing up humans in clerical jobs for more value-added work. This is one example of the huge benefits that the AI revolution is having, and will continue to have, on society. Anyone who has read Adam Smith’s The Wealth of Nations and his description of the pin factory sees the logic and economic benefits of increasing specialisation—but also the drudgery of work that can result from it. Among many other benefits, AI will continue that process of freeing people up from repetitive, inane tasks and on to more value-added work, increasing human happiness along with it.

However, then we have the flip side: the concerns around risks, all the way up to existential risks. We live in a radically uncertain world, a term coined by the noble Lord, Lord King of Lothbury, and John Kay. There has been much hyperbole around AI risk in recent months, but we need to take those risks seriously. Just as Martin Weitzman put forward his very elegant argument for investing large amounts of money today in climate change, based on the tail risks of worst-case scenarios, so too we should consider the tail risks of where massive increases in digital compute and the potential emergence of a superintelligence—something that far exceeds human intellectual capabilities in every area—will take us, and invest appropriately based on that.

There are no historical parallels for the technological singularity that AI could unleash. Perhaps one instructive episode would be the fate of the Aztec civilisation. The Aztecs had existed in the world as they knew it for many thousands of years, with no significant change from one year to the next. Then one day in 1519, the white sails of the fleet of Cortés appeared on the horizon, and nothing was ever the same again. Within months, Cortés and his few hundred men had conquered the vast Aztec empire, with its population of millions, one of the most remarkable and tragic feats in human history. To avoid perhaps one day in the coming decades seeing a version of the white sails of Cortés on our own horizon, we must carefully consider our approaches now to this rapidly developing technology and manage the risks. That means regulation. It just will not do to let the private sector get on with it and hope for the best.

What should this mean for regulation and legislation development? The key point for me is that the Government cannot effectively regulate something that they do not adequately understand. I may be wrong, but I do not think that the Government, or any noble Lord here today, will have a fully thought-through plan of action for the regulation of AI. We are in a highly unpredictable situation. To this end, the first thing that we need to think about is how we can implement a sovereign research capability in AI which will develop regulation in parallel.

Research and regulation are different sides of the same coin in this instance. We need to learn by doing, we need agencies that can attract top-class people and we need new models of governance that enable the agility and flexibility that will be required for public investment into AI research and regulation. Any attempt to fold this effort into a government department or a traditional public research organisation is simply not going to work.

So how should we go about this? It was a great privilege a few years back to help shape the Advanced Research and Invention Agency Act, and I am very pleased to see ARIA moving forward, albeit at an early stage. There are a number of things we can draw from it regarding how we approach AI capability. AI is exactly the sort of high-risk, high-reward technology we would expect ARIA to be investing in. But if we agree that AI needs a research focus, we could perhaps set up a different organisation in the same way as ARIA, but with a specific focus on AI, and call it ARIA-AI; or we could even increase funding and provide that focus to an existing part of ARIA’s organisational set-up.

In Committee on the ARIA Bill, we debated extensively the potential to give ARIA a focus or aim similar to that of the United States’ Defence Advanced Research Projects Agency, and ARPA-E. The Government wanted ARIA to maintain the freedom to choose its own goals, but there is an opportunity now to look at this again and use the strengths of the ARIA concept—its set-up, governance structures and freedom of action—to help the UK move forward in this area.

This is similar in some respects to the “national laboratory” for AI proposed by Tony Blair and the noble Lord, Lord Hague, in their recent report. This research organisation, along with the AI task force, if set up in the right way, would advance research alongside regulation, enable a unique competitive advantage for the UK in this area and begin the process of solving AI safety problems.

This will all need to be backed up by the right levels of government support. This is one of those areas where we should fully commit as a nation to this effort, or not press on with it at all. I can think of a number of such examples. The Government’s aspiration to build up to exascale levels of computing by 2026 is very welcome but would give the entire British state the ability to train only one GPT-4 scale model four years after OpenAI did. In addition, before DeepMind was acquired by Google, it had an annual budget of approximately £1 billion, which gives a view of the scale of investment required. Can the Minister, in summing up, say what plans there are to scale up the Government’s ambitions in this area?

Finally, the Government’s recent White Paper outlines a pretty sensible approach to balancing management of the risks and opportunities of the technology but, as I said at the start, there are areas where it has perhaps been overtaken by events, in a field that is moving at breakneck speed—and that speed is the problem here. Unlike climate change, whose full effects will not manifest for decades or even centuries, AI is developing at an incredible pace. We therefore need to start thinking immediately about the initial regulatory frameworks. The Government could consider, as a minimum, putting the five principles in their White Paper on a statutory footing in the near term, to provide regulators with enhanced powers to address the risks of AI.

Here are the bones of an AI Bill: perhaps legislating to set up a new research organisation, providing regulators with the right initial powers, and providing the funding to sit behind all of this, which would at the same time build upon the world-leading AI development capabilities we now have in the UK. I beg to move.

16:03
Baroness Stowell of Beeston (Con)

My Lords, I first congratulate the noble Lord, Lord Ravensdale, on securing this debate and on the comprehensive and interesting way he has introduced it. I signed up to speak for two reasons: first, because I thought I might learn something; and secondly, because I thought it would be helpful for me to highlight that the Communications and Digital Select Committee of your Lordships’ House, which I have the great privilege to chair, has recently launched an inquiry into large language models, focusing on how we can capitalise on the opportunities while managing the risks.

I am under no illusion: the latest advances in generative AI are significant, but we must not allow scaremongering about the future to be a distraction from today’s opportunities and risks. In the committee’s view, what is most important at the moment is to separate hype from reality and make a considered assessment of what guardrails and controls are needed now.

When we come back in September, we will take a detailed look at how large language models are expected to develop over the next three years, how well those changes are accounted for by the Government’s White Paper and our existing regulators, and what needs to happen to capitalise on the benefits and address the most pressing risks. That will include close examination of the structure, work and capacity of the regulators and government teams and their ability to deliver on the White Paper’s expectations. We are open for written submissions and are currently inviting witnesses. We intend to hear from a wide range of key players—from the big tech platforms and fast-moving start-ups to academics, industry experts, regulators, government advisers and institutions abroad.

A key part of our work will be to demystify some of the issues and make sure we are not blinded by the rosy outlook that tech firms are promoting or by doom-saying about the imminent collapse of civilisation. I do not know about noble Lords, but I cannot help thinking how convenient it is to the big tech bros that so few people understand what is going on, so we are going to try to change that through our inquiry. This is not just to mitigate anything bad happening that we do not know about, but to make sure that all the power is not concentrated in a few people’s hands and that the many exciting, potential opportunities of this technology are available not only to them.

Some industries are already seriously concerned, and with good reason. Those in the creative sector, particularly news publishers, are worried about intellectual property. The Minister covers IP policy as well as AI and will be aware just how important this issue is. I would be grateful if he updated us on the Intellectual Property Office working group, which is developing government policy so that news organisations, publishers, writers, artists, musicians and everyone else whose creations are being used by the tech firms to develop LLMs can be properly compensated, and commercial terms established that are fair to all.

Content creators are already seeing their work being used to train generative AI models. If studio businesses can get movie scripts, images or computer-generated background artists for free, why would they pay? The strikes in Hollywood are probably just the beginning of the disruption. In my committee’s creative industries report in January, we predicted looming disruption in the sector and called on DCMS to pay more attention. Sadly, we were right, although changes have come much faster than expected.

At the same time, we cannot wish these technologies away, and nor should we—they present massive opportunities too. We may now be at a critical juncture, both in securing UK competitive advantage in the AI race, and in preventing the risk of overmighty tech firms releasing technologies they cannot control. We need to get this right, and fast. I hope my committee’s work will play a role in shaping this debate and informing government policy.

I look forward to hearing much more on AI regulation in the coming months, and I hope the Minister and his colleagues will respond enthusiastically when we invite them to give evidence to our committee.

16:09
Lord Browne of Ladyton (Lab)

My Lords, it is a distinct pleasure to follow the noble Baroness, Lady Stowell of Beeston. I associate myself with her words of commendation and congratulation to the noble Lord, Lord Ravensdale; it is entirely appropriate that this debate be led by someone with the lived experience of an engineer, and in the noble Lord we have found that person.

Mindful of time, I will limit myself to asking a few diagnostic questions with which I hope the Minister will engage. I believe that they raise issues which are essential to harnessing the benefits of AI, if it is to be done in a manner which is both sustainable and enjoys public consent and trust. Using the incremental process of legislation to govern a technology characterised by exponential technological leaps is not easy. Though tempting, the answer is not to oscillate between the poles of a false dichotomy, with regulatory rigour on one side and innovation on the other. Like climate change, AI is a potentially existential risk that is charted by ever-deepening scientific understanding, emerging opportunity and emerging demonstrable risks.

It is not always true that an absence of government means liberation for business or innovation, especially when business and innovation know that more comprehensive regulation is on the horizon. Clear signals from the Government on regulation, even in advance of legislation, will not inhibit the AI sector but give it greater confidence in planning, resourcing and pursuing technological advances. My first question is: given that the Prime Minister last month announced his intention for the UK to play a role in leading the world in AI regulation, how does he plan to shape an international legal framework when our own is still largely hypothetical? When do the Government plan to devote parliamentary time to bringing forward some instrument or statement which will deal squarely with the future direction of domestic AI regulation? The President of the United States appears to be doing just that, to ensure, in his own words, that

“innovation doesn’t come at the expense of Americans’ rights and safety”.

I am mindful too of machinery of government issues. Like climate change, AI cuts across apparently discrete areas and will have consequences for all areas of government policy-making. Of course, as a member of the AI in Weapons Systems Select Committee of your Lordships’ House, I am conscious that the ethical implications of AI for national defence are sparking great concern. But, as the Government’s White Paper made clear, we envisage a role for AI in everything, from health and energy policy to law enforcement and intelligence gathering. It is therefore imperative that the Government establish clear lines of accountability within Whitehall so that these intersections between discrete areas of policy-making are monitored and only appropriate innovation is encouraged.

Briefings we all received in anticipation of this debate highlight growing concern over the lack of transparency and accountability about the existing use of AI in areas such as policing and justice, with particular emphasis on pursuing alleged benefit fraud. The Dutch example should be a lesson to us all.

I should be grateful if the Minister would describe how the current formal structures interact, as well as the degree to which No. 10 provides a central co-ordinating role. As the AI Council recedes from view, and as the Centre for Data Ethics and Innovation’s newly appointed executive director and apparently refreshed board get to grips with how to support the delivery of priorities set out in the Government’s National Data Strategy, my second question to the Minister is whether he feels that the recommendation in the recent joint Blair-Hague report—especially having the Foundation Model Taskforce report directly to the Prime Minister—should be under active consideration. That may be a useful step to achieving better co-ordination on AI across government. If not, why not?

In preparing for today, I had the pleasure of tracking the Government’s publications on this issue for the past three years or so. In each of those, they quite rightly emphasise the importance of public trust and consent. From my experience as a member of the AI in Weapons Systems Select Committee, I note that in the first section of the executive summary of the Defence Artificial Intelligence Strategy, the Government’s vision is to be “the world’s most … trusted” organisation for AI in defence. An essential element of that trust, we are told, is that the use of AI-enabled weapons systems will be restricted to the extent of the tolerance of the UK public. The maintenance of public trust and support will be a constant qualification of the principles that will inform the use of AI-enabled systems. As there has never been any public consultation on the defence AI strategy, how will the Government, on our behalf, determine the limits of the tolerance of the public? My own research has revealed that this is a very difficult thing to measure if it is not informed tolerance or opinion. The Centre for Data Ethics and Innovation’s polling corroborates that. What steps are the Government taking to educate the public so they can have an informed base from which to decide their individual or collective tolerance of, or level of trust in, the use of AI for any purpose, never mind defence?

16:14
Lord Kakkar (CB)

My Lords, it is a distinct pleasure to follow the noble Lord, Lord Browne of Ladyton, and to join other noble Lords in congratulating my noble friend Lord Ravensdale on the very thoughtful way in which he introduced this important debate. I declare my interests as chairman of King’s Health Partners, chairman of the King’s Fund and chairman of the Office for Strategic Coordination of Health Research.

There are few areas in national life, the conduct of government and the delivery of public services that will become as dependent on artificial intelligence as healthcare. It is to that particular area that I will confine my remarks. We all recognise that there are increasing demands on the delivery of healthcare services through a changing population demographic, a consequent increased demand on clinical services and a substantial workforce shortage. Of course, with all that increasing demand, there will be the need either for the economy to grow at a substantial rate to be able to provide funding for those services or for us to adopt innovation to deliver those services. One of the important innovations is of course the application of artificial intelligence. We have seen that already in healthcare in the areas of diagnostics, imaging and pathology. It is helping us to deliver high-throughput analysis of scans and pathological samples, and, through the application of algorithms, it is helping us to determine better the risk of poor outcomes in patients, to improve diagnosis and therefore to improve the efficiency of our service.

However, there are also substantial challenges. The development of artificial intelligence modalities requires access to high-quality data, and in healthcare we know that data are fragmented across the system through the use of different methods for their collection and creation. As a result, unless we have a single approach to the collection of data, we run a substantial risk that the data used to develop and train AI systems will be inaccurate and, as a consequence, that inaccuracy will be translated into the provision of clinical services. That will potentially drive discrimination in those services, whereby the data on under-represented populations are not sufficiently incorporated into the development of such technologies and tools.

Successive Governments have had great difficulty in establishing the social licence that will allow for the broad collection and use of those data to drive research, technology and innovation opportunities in healthcare. The whole data area will be one of the most important regulatory challenges in the safe and effective development of AI technologies in healthcare. Does the Minister believe that His Majesty’s Government are at a place now where they can secure access to those data to drive these important opportunities? If not, how will they drive the data revolution in such a way that the public, more generally, are confident that those data will be used broadly for this purpose and other research and development purposes?

The MHRA in 2020 defined the regulatory pathway for the adoption of AI technologies as one very much mirroring that for medical devices. Clearly, some years have passed since that important approach to the regulation of AI was first established. The rigour applied to the regulatory supervision of the development of devices is slightly different to that for other therapeutic innovations. Is the Minister content that, in pursuing a pathway of regulatory development that is based on the device pathway—which is predominantly risk-based; that is reasonable—and looks at the safety and performance of these applications, there will be sufficient regulatory rigour to provide public confidence?

Regulatory elements of that pathway must not only include an understanding of the source of the data used to develop these technologies but provide a requirement for transparency in terms of an appropriate understanding of what forms the basis of the AI application. That is so that there can be a proper clinical understanding of the appropriateness of that application and of how it can be applied in the broader context of what must be understood about patients and their broader circumstances to reach an appropriate clinical decision.

Beyond that, there are also substantial concerns about ethical considerations in terms of both data privacy and the questions asked to train a clinical application being properly grounded in the modern ethics of healthcare delivery. Is the Minister content that the current regulatory pathway is sufficient and what steps are proposed by His Majesty’s Government to continue to develop these regulatory pathways so that they keep pace with the important advances and therefore benefits that will be derived from AI in healthcare?

16:20
Lord Houghton of Richmond (CB)

It is a pleasure to follow the noble Lord, Lord Kakkar, and I thank the noble Lord, Lord Ravensdale, for scheduling this most timely debate. I draw attention to my relevant interests in the register, specifically my advisory roles with various tech companies—notably Tadaweb, Whitespace and Thales—and my membership of the House of Lords AI in Weapon Systems Committee.

It is as a result of my membership of that committee that I am prompted to speak, but I emphasise that the committee, although it has collected a huge amount of evidence, is still some way off reaching its final conclusions and recommendations. So today I speak from a purely personal and military perspective, not as a representative of the committee’s considered views. I want to make a few broad points in the context of the regulation of artificial intelligence in weapon systems.

First, it is clear to me at least, from the wide range of specialist evidence given to our committee, that much of that evidence is conflicted, lacks consensus or is short of factual support. This is especially true of evidence on the technology, which is mostly concerned with future risk rather than current reality. Secondly, it is reasonably clear that, although there is no perfect equilibrium, there are as many benefits to modern warfare from artificial intelligence as there are risks. I think of such things as greater precision, less collateral damage, speed of action, fewer human casualties, less human frailty and a greater deterrent effect—but this is not to deny that there are significant risks. My third general point is that to deny ourselves the potential benefit of AI for national military advantage, in Armed Forces increasingly lacking scale, would surely not be appropriate. It will most certainly not be the course of action that our enemies pursue, though they may well seek to impel us towards it through devious means.

My own view, therefore, is that the sensible policy approach to the future development of AI in weapon systems is to proceed but to do so with caution, the challenge being how to satisfactorily mitigate the risks. To some extent, the answer is regulation. The form that might take is up for further debate and refinement but, from my perspective, it should embrace at least three main categories.

The first would be the continued pursuit of international agreement or enhancements to international law or treaty obligations to prevent the misuse of artificial intelligence in lethal weapon systems. The second would be a refined regulatory framework which controlled the research, development, trials, testing and, ultimately—when passed—the authorisation of AI-assisted weapon systems prior to operational employment. This could be part of a national framework initiative.

As an aside, I think I can say without fear of contradiction that no military commander—certainly no British one—would wish to have the responsibility for a fielded weapon system that made autonomous judgments through artificial intelligence, the technology and reliability of which was beyond human control or comprehension.

The third area of regulation is on the battlefield itself. This is not to bureaucratise the battlefield. I think I have just about managed to convince a number of my fellow committee members that the use of lethal force on operations is already a highly regulated affair. But there would need to be specific enhancements to the interplay between levels of autonomy and the retention of meaningful human control. This is needed both to retain human accountability and to ensure compliance with international humanitarian law. This will involve a quite sophisticated training burden, but none of this is insurmountable.

I want to finish with two general points of concern. There are two dangerous and interlinked dynamics regarding artificial intelligence and the nature of future warfare. Together, they require us to reimagine the way future warfare may be, and arguably already is being, conducted. Future warfare may not be defined by the outcome of military engagement in set-piece battles that test the relative military quality of weapons, humans and tactics. The undesirability of risking the unpredictability of crossing the threshold of formalised warfare may cause many people, including political leaders, to think of alternative means of gaining international competitive advantage.

The true dangers of artificial intelligence, in a reimagined form of warfare that is below the threshold of formalised war, lie in its ability to exploit social media and the internet of things to radicalise, fake, misinform, disrupt national life, create new dependencies and, ultimately, create alternate truths and destroy the democratic process. By comparison with the task of regulating this reimagined form of warfare, the regulation of autonomous weapon systems is relatively straightforward.

16:26
The Lord Bishop of Oxford

My Lords, I declare my interest as a member of two recent select and scrutiny committees on AI, and as a founding board member of the Centre for Data Ethics and Innovation.

Together with others, I congratulate the noble Lord, Lord Ravensdale, on this debate, and it is a pleasure to follow the noble and gallant Lord, Lord Houghton.

We are at a pivotal moment in the development of AI. As others have said, there is immense potential for good and immense potential for harm in the new technologies. The question before us is not primarily one of assessing risk and developing regulation. Risk and regulation must both rest on the foundation of ethics. My fundamental question is: what is the Government’s view on the place of ethics within these debates, and the place of the humanities and civil society in the development and translation of ethics?

In 1979, Pope John Paul II published the first public document of his papacy, the encyclical Redemptor Hominis. He drew attention to humanity’s growing fear of what humanity itself produces—a fear revealed in much recent coverage of AI. Humanity is rightly afraid that technology can become the means and instrument for self-destruction and harm, compared with which all the cataclysms and catastrophes of history known to us seem to fade away. Pope John Paul II goes on to argue that the development of technology demands a proportional development of morals and ethics. He argues that this last development seems, unfortunately, to be always left behind.

Professor Shoshana Zuboff made a similar point on this time lag more recently, in her book The Age of Surveillance Capitalism. She wrote:

“We have yet to invent the politics and new forms of collaborative action … that effectively assert the people’s right to a human future”.


These new structures must be developed not by engineers alone but in rich dialogue across society. Society together must ask the big ethical questions. Will these new technologies lead us into a more deeply humane future and towards greater equality, dignity of the person and the creative flourishing of all? Or will they lead us instead to a future of human enslavement to algorithms, unchallenged bias, still greater inequalities, concentration of wealth and power, less fulfilling work and a passive consumerism?

Five years ago the Government established the Centre for Data Ethics and Innovation to explore these questions. The centre began well but was never established on an independent legal footing and seems to have slipped ever further from the centre of the Government’s thinking and reflection. One of the hopes of the CDEI was that it would provide an authoritative overview of sector-led innovation, have a co-ordinating and oversight role, and be a place for bringing together public engagement, civil society, good governance and technology. I therefore ask the Minister: what are the Government’s plans for the CDEI in the current landscape? What are the plans for the engagement of civil society with AI regulation and ethics, including with the international conference planned for the autumn? Will the Government underline their commitment to the precautionary principle as a counterweight to the unrestrained development of technology, because of the risks of harm? Will we mind the widening gap between technology and ethics for the sake of human flourishing into the future?

16:30
Lord Fairfax of Cameron (Con)

My Lords, it is a great pleasure to follow the right reverend Prelate. I declare my interest as a member of the AI in Weapon Systems Committee. I very much thank the noble Lord, Lord Ravensdale, for choosing for debate a subject that arguably now trumps all others in the world in importance. I also owe a debt of gratitude to a brilliant young AI researcher at Cambridge who is researching AI risk and impacts.

I could, but do not have the time to, discuss the enormous benefits that AI may bring and some of the well-known risks: possible extreme concentration of power; mass surveillance; disinformation and manipulation, for example of elections; and the military misuse of AI—to say nothing of the possible loss, as estimated by Goldman Sachs, of 300 million jobs globally to AI. Rather, in my five minutes I will focus on the existential risks that may flow from humans creating an entity that is more intelligent than we are. Five minutes is not long to discuss the possible extinction of humanity, but I will do my best.

Forty years ago, if you said some of the things I am about to say, you were called a fruitcake and a Luddite, but no longer. What has changed in that time? The changes result mainly from the enormous development in the last 10 years of machine learning and its very broad applicability—for example, distinguishing images and audio, learning patterns in language, and simulating the folding of proteins—as long as you have the enormous financial resources necessary to do it.

Where is all this going? Richard Ngo, a researcher at OpenAI and previously at DeepMind, has publicly said that there is a 50:50 chance that by 2025 neural nets will, among other things, be able to understand that they are machines and how their actions interface with the world, and to autonomously design, code and distribute all but the most complex apps. Of course, the world knows all about ChatGPT.

At the extreme, artificial systems could solve strategic-level problems better than human institutions, disempower humanity and lead to catastrophic loss of life and value. Godfathers of AI, such as Geoffrey Hinton and Yoshua Bengio, now predict that such things may become possible within the next five to 20 years. Despite two decades of concentrated effort, there has been no significant progress on, nor consensus among AI researchers about, credible proposals on the problems of alignment and control. This led many senior AI academics—including some prominent Chinese ones, I emphasise—as well as the leaderships of Microsoft, Google, OpenAI, DeepMind and Anthropic, among others, recently to sign a short public statement, hosted by the Center for AI Safety:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.


In other words, they are shouting “Fire!” and the escape window may be closing fast.

As a result, many Governments and international institutions, such as the UN and the EU, are suddenly waking up to the threats posed by advanced AI. The Government here are to host a global AI safety summit this autumn, apparently, but Governments, as some have said, are starting miles behind the start line. It will be critical for that summit to get the right people in the room and in particular not to allow the tech giants to regulate themselves. As Nick Bostrom wrote:

“The best path towards the development of beneficial superintelligence is one where AI developers and AI safety researchers are on the same side”.


What might the shape of AI regulation look like? Among other things, as the noble Lord, Lord Ravensdale, said, Governments need to significantly increase the information they have about the technological frontiers. The public interest far outweighs commercial secrecy. This challenge is international and global; AI, like a pandemic, knows no boundaries.

Regulation should document the well-known and anticipated harms of societal-scale AI and incentivise developers to address these harms. Best practice for the trustworthy development of advanced AI systems should include regular risk assessments, red teaming, third-party audits, mandatory public consultation, post-deployment monitoring, incident reporting and redress.

There are those who say that this is all unwarranted scaremongering, as some have touched on this afternoon, and that “there is nothing to see here”. But that is not convincing because those people—and they know who they are—are transparently just talking their own commercial and corporate book. I also pray in aid the following well-known question: would you board an aeroplane if the engineers who designed it said it had a 5% chance of crashing? Some, such as Eliezer Yudkowsky, say that we are already too late and that, as with Oppenheimer, the genie is already out of the bottle; all humanity can do is to die with dignity in the face of superhuman AI.

Nevertheless, there are some very recent possible causes for hope, such as the just-announced White House voluntary commitments by the large tech companies and the Prime Minister’s appointment of Ian Hogarth as the chair of the UK Government’s AI Foundation Model Taskforce.

For the sake of humanity, I end with the words of Dylan Thomas:

“Do not go gentle into that good night …

Rage, rage against the dying of the light”.

Given that we are starting miles behind the start line, I refer to Churchill’s well-known exhortation: “Action this day”.

16:36
Lord Giddens (Lab)

My Lords, let me join the queue in congratulating the noble Lord, Lord Ravensdale, on his stellar career to date and his excellent speech. He will bring an enormous amount to your Lordships’ House, especially his depth of knowledge in the area we are discussing.

ChatGPT and other forms of generative AI have taken the world by storm. I am a social scientist, but I have spent several years studying AI and we have never seen anything like this: every day around the world, the media has stories about the evolution of AI. This is a quite extraordinary phenomenon.

As we all know, the regulatory problems are huge, and Governments everywhere are scrambling to keep up. Extremely distinguished scientific figures, such as Geoffrey Hinton, just mentioned by the noble Lord, Lord Fairfax, have declared generative AI to be an existential threat to humanity. I do not wholly agree with this; it is awesome that human consciousness might be replicated and even improved on, in some sense, but the whole structure of science depends on social institutions. We cannot simply replace that by machines. I think there is a progressive merging of human beings and intelligent machines, and I do not see much of a way back.

In the meantime, a host of other organisations such as schools and universities, where I work, is struggling to cope with much more mundane problems such as how to ensure that students are actually the authors of the academic work they produce. That does not sound like much, but it is a big issue on the ground.

From its earliest origins, AI has been linked to geopolitics and war. It was created not by Silicon Valley but by ARPA, later DARPA, the huge research programme set up by the US Government in response to the Soviet Union’s Sputnik launches. Regulatory efforts today, as in previous eras, are likely to be dislocated on a global level by current geopolitical tensions, especially those involving Russia, China and the West.

The most extensive plan for regulation that I have seen is that developed by the EU. As with our Government’s programme, its aim is to balance opportunity and risk, although there are lots of huge issues still unresolved in all this. I support that ambition but it will be very difficult to achieve, given that the pace and scope of innovation has accelerated off the wall. This time, much of it is driven by digital start-ups rather than the huge digital corporations, and it is genuinely global.

Is there anyone in the Chamber who does not have their smartphone close at hand? Even I have mine in my pocket, although I sort of hate it. The smartphone in your pocket has much more intelligence than many machines that we have created, including those that took human beings to the moon. This is quite extraordinary stuff.

As was announced only about two days ago in the US, ChatGPT will become available to smartphone users across the world. Imagine the impact of that, given that the proportion of the world’s population with access to a smartphone has risen to 84%. It will be absolutely awesome. I make a strong plea to the Government Front Bench to have a far more extensive debate about these issues, because they are so fundamental to a whole range of political, social and economic problems that we face.

16:41
Lord Anderson of Ipswich (CB)

My Lords, machine learning models, most famously AlphaFold, have a well-known role in the discovery of useful drugs. Drugs need to be safe, so open-source toxicity datasets are used to screen new molecules and discard those which are predicted to be toxic—a justly celebrated benefit of artificial intelligence.

On a darker note, suppose that a bad actor wishes to create a new nerve agent. They could take an open-source generative model and set it to work with the same toxicity dataset but with the instruction to seek out, rather than avoid, molecular structures predicted to be toxic. There will be false positives and the molecule, once identified, would still have to be synthesised, but it is now feasible to find multiple previously unknown chemical warfare agents with little more than a computer and an internet connection—as shown in a recent paper published after some agonising, and as footnoted in last month’s thought-provoking Blair/Hague report.

Crimes, as well as threats to national security, can be facilitated by AI techniques. Take high-value spear phishing, historically a labour-intensive enterprise. The diffusion of efficient and scalable AI systems will allow more actors to carry out such attacks, at a higher rate and volume, on targets who can be researched by data extraction attacks or scraping social media and can be more cunningly deceived with the help of speech synthesis systems and fake images. Similar disinformation techniques will no doubt be used by others to diminish our capacity to know what is real, and thus to threaten our democracy.

Democracy is not a suicide pact; accordingly, those who protect us from serious crime and threats to our security must themselves be able to use AI, subject to legal constraints founded on civil liberties and set by Parliament. My independent review of the Investigatory Powers Act, concentrating particularly on the work of the UK’s intelligence community, UKIC, was presented to the Prime Minister in April and quietly published last month. As part of this most timely debate, which I congratulate my noble friend Lord Ravensdale on securing, I will summarise three of its conclusions.

First, as one would hope, UKIC makes use of AI. It underlies existing capabilities such as cyber defence against malicious actors and the child abuse image database. UKIC has for many years employed machine learning automation techniques such as image-to-text conversion, language translation, audio processing and the use of classifiers to pick information of interest out of huge datasets. Models can be trained on labelled content to detect imagery of national security concern, such as weapons, allowing the work of human analysts to be focused on the most promising images. Other techniques of significant potential value include speech to text and speaker identification.

Secondly, UKIC itself, and those entrusted with its oversight, are alert to the ethical dilemmas. IPCO’s Technology Advisory Panel—a body recommended in my bulk powers review of 2016 and ably led by the computer scientist Professor Dame Muffy Calder—is there to guide the senior judicial commissioners who, quite rightly, have the final say on the issue of warrants. The CETaS research report published in May, Privacy Intrusion and National Security in the Age of AI, sets out the factors that could determine the intrusiveness of automated analytic methods. Over the coming years, the focus on how bulk data is acquired and retained may further evolve, under the influence of bulk analytics and AI, towards a focus on how it is used. Perhaps the Information Commissioner’s Office, which already oversees the NCA’s use of bulk datasets, will have a role.

Thirdly, in a world where everybody is using open-source datasets to train large language models, UKIC is uniquely constrained by Part 7 of the Investigatory Powers Act 2016. I found that these constraints—designed with different uses in mind, and comparable to the safeguards on far more intrusive powers such as the bulk interception of communications—impinge in certain important contexts on UKIC’s agility, on its co-operation with commercial partners, on its ability to recruit and retain data scientists, and, ultimately, on its effectiveness. My conclusion was that a lighter-touch regime should be applied, with the consent of a judicial commissioner, to certain categories of dataset in respect of which there is a low or no expectation of privacy. That would require a Bill to amend the IPA. I do not always welcome Home Office Bills, but I hope this one will come sooner rather than later.

16:46
Lord Bilimoria (CB)

My Lords, I spoke in the “AI in the UK” debate just over a year ago, on 25 May. At that time, I was the president of the CBI, which I stepped down from in June last year after completing my two-year term. I quoted Susannah Odell, the CBI’s head of digital policy at the time, who said:

“This AI strategy is a crucial step in keeping the UK a leader in emerging technologies and driving business investment across the economy. From trade to climate, AI brings unprecedented opportunities for increased growth and productivity. It’s also positive to see the government joining up the innovation landscape to make it more than the sum of its parts … With AI increasingly being incorporated into our workplaces and daily lives, it’s essential to build public trust in the technology. Proportionate and joined-up regulation will be a core element to this and firms look forward to engaging with the government’s continued work in this area. Businesses hope to see the AI strategy provide the long-term direction and fuel to reach the government’s AI ambitions”.


At that time, I made the same point I have made many times: if we are to achieve this ambition, I do not think we can do it by investing 1.7% of GDP in research, development and innovation compared with the 3.1% and 3.2% that America and Germany do. We need to increase our investment in R&D and innovation by at least 1% of GDP. Does the Minister agree?

Since the mid-1990s—in less than three decades—we have had the internet, dotcom, blockchain, and now we have AI; by the way, hand in hand with AI, quantum is the next big leap. AI is developing at a rapid pace and, since we debated it just over a year ago, there is much more on the agenda, from generative language models such as ChatGPT, to medical screening technology. It is computer vision; it is speech to text and natural language understanding; it is robotics; it is machine learning; I could go on with the amazing capabilities.

The UK Government, in their National AI Strategy, say that AI is the

“fastest growing deep technology in the world, with huge potential to rewrite the rules of entire industries, drive substantial economic growth and transform all areas of life”.

Such transformative technology brings both risks and benefits, which we are discussing in this debate.

A point that has not been brought up is that 96% of companies involved in AI are SMEs. Around 75% of those are based in London and the south-east, and the main sectors are technology, healthcare and science, professional services and financial services. Although 96% of them are SMEs, £7.6 billion in revenue—well over half—comes from the large companies. That is, of course, not surprising at all.

If I had the time, I could list all the benefits of AI, from safer cars and transport systems to benefits for businesses and public services, to democracy being made stronger and to crime prevention and defence, as we have heard from the noble and gallant Lord, Lord Houghton. I could list the risks, including a lack of transparency, bias and discrimination, privacy and ethical concerns, security risks, concentration of power, dependence on AI, job displacement, economic inequality, legal and regulatory challenges, an AI arms race, loss of human connection, misinformation and manipulation, and unintended consequences. As the noble Lord, Lord Ravensdale, mentioned—I thank him for leading this debate—the existential risk is frightening, to say the least. PwC has said that 7% of jobs in the UK are at high risk of being displaced, but that the overall effect should be broadly neutral. Will the Government reassure us that that will be the case?

There is a call for rapid regulatory adaptation. Sundar Pichai, the CEO of Google, has warned about the potential harms of AI and called for a “suitable regulatory framework”. BCS has issued a report outlining how AI can be helped to “grow up” responsibly. The Russell Group of universities—I am chancellor of the University of Birmingham—has said that AI should be used for the

“benefit of staff and students—enhancing teaching practices”,

and that we should not be frightened of it.

I am proud to announce that the University of Birmingham, along with IIT Madras, one of the leading Indian educational institutions, has just announced a joint master’s degree in AI and data science, conducted on both campuses, with the students coming out with a joint degree. This is a first. The report of Sir Tony Blair and the noble Lord, Lord Hague, has been referred to: A New National Purpose: AI Promises a World-Leading Future of Britain.

Collaboration is absolutely crucial, but no one has mentioned this. Can the Minister assure us that the Horizon programme, which is sitting on the Prime Minister’s desk, is going to be activated? The sooner that is done and the sooner we have collaborative research, the more it will help AI to accelerate.

I turn to public trust. AI will be undermined unless the public are informed. What are the Government’s plans to educate the public on AI?

My final point is on labour shortages. We need to activate the labour shortage occupation list to enable us to have access to the talent to actually make the UK a world leader in AI. Will the Government do that?

16:52
Baroness Primarolo (Lab)

My Lords, I too congratulate the noble Lord, Lord Ravensdale, on securing this debate. Frankly, the breadth of the contributions thus far shows the urgent need for Parliament to be actively involved and to make sure that those technologies are held accountable. Of course, the big question is how.

In the short time available today I want to touch on another impact of AI: the impact of AI in the workplace and its potential implications for the future of work. I am grateful to Mary Towers of the TUC for the very helpful information she has provided about workplace experience, and I can give only a few of those examples today. The TUC has produced a manifesto, Dignity at Work and the AI Revolution, laying out the values that we should adopt to make sure that technology at work is for the benefit of everyone, and that we should continue to assert the importance of human agency in the face of technological control.

Amid much of the hype and worry about a data-driven transformation of our world, there is, frankly, something missing: the experience of those who are already having their lives changed. Their experience helps us to understand AI and the change it brings to our lives, affecting every relationship and interaction we have, in the workplace, in our families, as consumers and as citizens. Our lives are enmeshed in—and some say dominated by—data. Data is constantly collected about who we are, what we do and the environment in which we live and work.

Forty years ago, academics, medical experts, researchers and civil society came together on a major advance in medical ethics. The challenge they had at that time was embryology and the question of how to bring together science and what our communities believed was acceptable without at the same time stifling the technology and making it unable to deliver the very best of its opportunities. I am not saying that we can do exactly the same now, but it is about bringing together the ethics to underpin the work that is being done. Any noble Lord contributing to this debate must surely be worried that, by the time they sit down, the whole area will have advanced yet again.

The intersection of technology and work has often been a source of conflict and disagreement. The predictions of technologically driven wealth creation are hailed as a route to greater leisure and well-being, but that vision is miles from the experience of those earning £10.50 an hour at the Amazon factory in Coventry. Amazon’s technologically intensive business practices undermine their belief in themselves, holding them to targets that they can never know and governing them using technologies and data that are still developing and are not perfect.

Your Lordships should also look at the experience of Equity, the performers’ union, which has campaigned to stop AI stealing the show, as performers are having their images, voices and likenesses reproduced by the technology. Royal Mail requires its workers to use portable digital assistants. The workers describe it as having a tracking device on them constantly, saying that management simply does not trust them. Many feel that these PDAs are creating a punitive work culture.

How can we get agreement on the future of AI if we do not have the transparency, accountability, debate and discussion to make all our citizens feel that they are to be protected? Whether it is in the context of defence, the home, work or the environment, consent and trust are crucial. This is about evaluation, openness, data, ethical integrity and compliance with human rights.

As the noble Lord, Lord Anderson, said in his very valuable contribution, any regulation has to be founded on civil liberties and be accountable to Parliament, and include the commitment and involvement of our citizens. I cannot tell the Minister how to do this, but I hope your Lordships’ House will make a massive contribution to this very wide debate.

16:57
Lord Chartres (CB)

The noble Baroness has illustrated very eloquently the extent to which we already live in and are totally embraced by a technological system. I add my congratulations to my noble friend on securing this debate at a time when we stand on the brink of a transformation in human affairs every bit as momentous as the beginning of the nuclear age.

The technological system that the noble Baroness talked about has even intervened in matters spiritual. If vicars are scarce, you can get answers to questions from a new generation of IT resources. There is Robo Rabbi; a Polish Catholic competitor called SanTO, the Sanctified Theomorphic Operator; and a feeble Protestant version called BlessU-2. Confronted by technologies beyond the comprehension of the non-expert, some have even been tempted to treat AI pronouncements as semi-divine. Anthony Levandowski, a Silicon Valley engineer, established perhaps the first church of artificial intelligence, called Way of the Future. It may be some consolation to noble Lords to know that the sect has quietly closed down and liquidated its funds.

AI bots are in a totally different league from the technologies we have used in the past. Generative AI can simulate human emotions but cannot reflect on its actions; it has no empathy. Recently, the Australian singer-songwriter Nick Cave was sent a lyric composed by ChatGPT in the style of Nick Cave. His response was very significant; he said that his

“songs arise out of suffering”—

he is a man who has suffered the grievous death of two of his sons. He went on to say:

“Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing”.

AI can simulate human emotions, but it has no spirit. Its advent raises very deep questions. Is there a form of logic that human beings have not accessed or cannot access, one that explores aspects of reality that we have never known or cannot directly know? It seems to me that we can regard AI as a tool, a partner or a competitor, and seek to confine it, co-operate with it or possibly even defer to it.

The speed of innovation, to which other noble Lords have alluded, makes a reconsideration of the kind of society we wish to inhabit very urgent. We are close to being able to manipulate human beings exactly as we want them to be—genetically, chemically, electrically—but we do not really know what we want them to be. There is an immense time lag between the advance of AI and our capacity to control it. The educational curriculum is increasingly dominated by technical subjects designed to serve the economy with a certain view of efficiency. That is not wrong but, unless we are careful, and if educational opportunities and landscapes are narrowed excessively, human beings in our society will have fewer and fewer places from which to mount a critique of the technological system which, as the noble Baroness says, embraces us all.

17:02
Lord Rees of Ludlow (CB)

My Lords, the seemingly superhuman achievements of AI are enabled by the greater processing speed and memory storage of computers compared with flesh-and-blood brains. AI can cope better than humans with data-rich, fast-changing networks—traffic flow, electricity grids, image analysis, et cetera. China could run a planned economy of a kind that Mao could only have dreamed of.

However, the societal implications are ambivalent here already. In particular, how can humans remain in the loop? If we were sentenced to a term in prison, recommended for surgery or even given a poor credit rating, we would expect the reasons to be accessible to us and contestable by us. If such decisions were entirely delegated to an algorithm, we would be entitled to feel uneasy, even if presented with compelling evidence that, on average, the machines make better decisions than the humans they have usurped.

AI systems will become more intrusive and pervasive. Records of our movements, health and financial transactions are in “the cloud”, managed by multinational quasi-monopolies. The data may be used for benign reasons—for instance, medical research—but its availability to internet companies is already shifting the balance of power from governments to globe-spanning conglomerates.

Clearly, robots will take over much of manufacturing and retail distribution. They can supplement, if not replace, many white-collar jobs: accountancy, computer coding, medical diagnostics and even surgery. Indeed, I think the advent of ChatGPT renders legal work especially vulnerable. The vast but self-contained volumes of legal literature can all be digested by a machine. In contrast, some skilled service-sector jobs—plumbing and gardening, for instance—require non-routine interactions with the external world and will be among the hardest to automate.

The digital revolution generates enormous wealth for innovators and global companies, but preserving a humane society will surely require redistribution of that wealth. The revenue thereby raised should ideally be hypothecated to vastly enhance the number and status of those who care for the old, the young and the sick. There are currently far too few of these, and they are poorly paid, inadequately esteemed and insecure in their positions. However, these caring jobs are worthy of real human beings and are far more fulfilling than the jobs in call centres or Amazon warehouses which AI can usurp. That kind of redeployment would be win-win. However, AI raises deep anxieties; even in the short term, ChatGPT’s successors will surely confront us, writ large, with the downsides of existing social media: fake news, photos and videos, unmoderated extremist diatribes, and so forth.

Excited headlines this year have quoted some experts talking about “human extinction”. This may be scaremongering, but the misuse or malfunction of AI is certainly a potential societal threat on the scale of a pandemic. My concern is not so much the science-fiction scenario of a “takeover” by superintelligence as the risk that we will become dependent on interconnected networks whose failure—leading to disruption of the electricity grid, GPS or the internet—could cause societal breakdowns that cascade globally.

Regulation is needed. Innovative algorithms need to be thoroughly tested before wide deployment, by analogy with the rigorous testing of drugs that precedes government approval and release. But regulation is a special challenge in a sector of the economy dominated by a few multinational conglomerates. Just as they can move between jurisdictions to evade fair taxation, so they could evade regulation of AI. How best can the UK help to set up an enforceable regulatory system with global range? It is good news that the Government are already tackling this challenge.

Finally, society will surely be transformed by autonomous robots, even though the jury is out on whether they will be “idiot savants” or will display wide-ranging superhuman intelligence—and whether, if we are overdependent on them, we should worry more about breakdowns and bugs or about being outsmarted; more about maverick artificial intelligence than about real stupidity.

17:07
Lord Holmes of Richmond (Con)

My Lords, I thank the noble Lord, Lord Ravensdale, for securing this debate and congratulate him on the way he introduced it. I declare my technology interests as set out in the register.

When we come to consider the regulation of AI, it is first worth considering how we define it. There are multiple definitions out there but, when it comes to regulation, it is best not to draw the definition too tightly and perhaps better to concentrate on the outcomes that are intended and the challenges that we are seeking to avoid. Ultimately, AI is just the deployment of data, and it is our data, so a central pillar must be the explainability of how an AI comes to any decision, and we should regulate to achieve a level of explainability that serves the citizen’s understanding, not just the software engineer’s. Does the Minister feel that synthetic data offers a number of potential solutions, not least to the privacy questions, and what would the Government consider in terms of quality-assuring and indeed regulating such synthetic data?

As has already been discussed, given that it is our data, it is right that we, and indeed every citizen, should have a say—should have a part in this AI play. It will come down to trustworthiness, and to everything that the developers, designers and businesses have to do to make this not merely trusted but trustworthy.

What more do the Government intend to do to foster this level of public debate and discourse around such an existential issue? Similarly, does the Minister agree that it would make sense to consider requiring an AI officer on the board of all businesses of a certain size? I put down an amendment to this effect to the then Financial Services and Markets Bill, as AI is obviously already pervasive across our financial services industry. Would it not make sense for the Government to consult on having AI officers on the boards of all such businesses?

We have already heard a lot about ChatGPT—you cannot go a day without hearing about it—but what about the energy it took to train ChatGPT, and the energy its continued use consumes? Has my noble friend considered what the Government might wish to conclude on the energy consumption of these AIs? Perhaps it would be better if photonic computing were used, rather than more traditional electronics, to massively reduce the energy consumption of these systems.

Similarly, if the public are to be enabled, it will take much more than regulation. Does my noble friend agree that we should look at a complete transformation of our education system—data literacy, data competency, digital competency, and financial and AI literacy at every point of the curriculum? Would that not be a good thing for the Government to go out and consult on over the summer and the autumn?

If we are to make a success of AI—and it is in our human hands to do so—it will be only through the engagement and enablement of every citizen in every society, and through understanding how to put that innovation into everybody’s hands. If we stuff this up and it goes wrong, that will be not a failure of the AI or the technology but a human failure: of legislators, regulators, businesses, corporates and all of us.

What are the plans for the summit this autumn? How broadly will people be engaged? What will be the role for civil society at that summit? Finally, can my noble friend set out briefly what he sees as the key differences between the approach of the UK to AI and that of other jurisdictions, not least the European Union?

We can make a success of what we have in front of us if we are rationally optimistic, understand the risks and step over the huge hype cycle of both unrealistic promise and overblown fears. We need to consider AI as incredibly powerful—but an incredibly powerful tool in our human hands, one that we can grip and make a success of economically, socially and politically for all our citizens.

17:13
Viscount Colville of Culross (CB)

My Lords, I declare an interest as a freelance television producer. I too congratulate my noble friend Lord Ravensdale on having secured this debate.

Last night, I went on to the ChatGPT website and asked it to write me a speech on the subject in this debate that worries me—the threat that AI poses to journalism—and this is a paragraph that it came up with:

“AI, in its tireless efficiency, threatens to overshadow human journalism. News articles can be automated, and editorials composed without a single thought, a single beating heart behind the words. My fear is that we will descend into a landscape where news is stripped of the very human elements that make it relatable, understood, and ultimately, impactful”.

Many noble Lords might agree that that is a frighteningly good start to my speech. I assure them that, from now on, whatever I say is generated by me rather than ChatGPT.

Generative artificial intelligence has many benefits to offer broadcasting and journalism. For instance, the harnessing of its data processing power will allow a big change in coverage of next month’s World Athletics Championships in Budapest by ITN. Not only will it allow the voiceover to be instantly translated, but it will also be able to manipulate the British sports presenter’s voice to broadcast in six to seven languages simultaneously. This will bring cost savings in translation and presenter fees.

Already there is an Indian television service, Odisha TV, which has an AI-generated presenter that can broadcast throughout the night in several Indian languages. AI-generated synthetic voice has already arrived and is available for free. The technology to manipulate an image so that a speaker’s lips are synchronised with the voice is commercially available, improving and becoming cheaper by the month. All these advances threaten on-screen and journalistic jobs.

However, noble Lords should be concerned by the threat to the integrity of high-quality journalism, an issue raised by my AI-generated introduction. We are now seeing AI accelerating trends in digital journalism, taking them further and faster than we would have thought possible a few years ago. Noble Lords have only to look at what is happening with AI search. At the moment, many of us search via a search engine such as Google, which presents us with a variety of links, but with AI search the information required is given at a quite different level of sophistication.

For instance, when asked, “What is the situation in Ukraine?”, the new Microsoft AI search tool will give an apparently authoritative three-paragraph response. It will have searched numerous news websites, scraped the information from those sites and sorted it into a three-paragraph answer. When the AI search engine was asked for the provenance of the information, it replied that

“the information is gathered from a variety of sources, including news organisations, government agencies and think tanks”.

Requests for more exact details of the news websites used failed to deliver a more specific answer. As a result, it is not possible for the user to give political weight to the information provided, or to discover its editorial trustworthiness. As many other noble Lords have mentioned, the ability to create deepfakes allows AI to synthesise videos of our public figures saying anything, whether true or not. There is nothing in the tech companies’ terms and conditions to ensure that the answers are truthful.

The very existence of quality journalism is at risk. Already we are seeing the newspaper industry brought to its knees by the big tech platforms’ near-monopoly of digital advertising spend. This has greatly reduced newspapers’ advertising revenue, on which much of their business depends. Social media is aggregating content from established news sites without paying fees proportionate to the expense of gathering news. The effect on the industry is disastrous, with the closure of hundreds of papers resulting in local news deserts, where the proceedings of local authorities and magistrates are no longer reported to the public. The new AI technology is further exacerbating the financial threat to the whole industry. Generative AI companies can scrape for free the information from news websites, which are already facing the increasing costs of creating original journalistic content. Meanwhile, many AI services, such as Microsoft’s new AI offering, charge up to $30 a month.

I have been involved in the Online Safety Bill, which has done a wonderful job, with co-operation from the Government and all Benches, of creating a law to make the internet so much safer, especially for children. However, it does nothing to make the internet more truthful. There needs to be pressure on the creators of generative AI to ensure that the information they are giving is truthful and transparent. The leaders of our public service media organisations have written to Lucy Frazer, asking her to set up a journalism working group on AI, bringing together the various stakeholders in this new AI world to work out a government response. The letter is addressed to DCMS, but I would be grateful if the Minister, whose portfolio covers AI policy, could ensure that his department takes an active role in setting up this crucial group.

An election is looming on the horizon. The threat of a misinformation war will make it difficult for voters properly to assess policies and politicians. This war will be massively exacerbated by search AI. It is time for the Government to take action before the new generation of information technology develops out of control.

17:18
Lord Watson of Wyre Forest (Lab)

My Lords, in the rather stressful five years that I spent as deputy leader of the Labour Party, I enriched myself and maintained an equilibrium by reading the works of some of the world’s great technologists. I was struck by two very powerful ideas, which is why I congratulate and thank the noble Lord, Lord Ravensdale, on this very important debate today.

The first is the idea that, contrary to 2,000 years of conventional wisdom, technological advance is exponential, not linear. The second is the idea of the technological singularity: a hypothetical point in time where technological advance becomes uncontrollable and irreversible. The context that you put your life in when you realise the enormity of those ideas got me through most Shadow Cabinet meetings, but it also allows me to contribute a couple of things to the security discussion that we are having today.

The first of these—I note the caveats of the noble and gallant Lord, Lord Houghton of Richmond, about there being contrary views—is that the singularity is no longer hypothetical but inevitable. The second is that, as the noble and right reverend Lord, Lord Chartres, said, AI can enhance human life beyond our imagination. It can prolong our lives, eliminate famine and reduce illness; it might even reverse global warming. However, if infused with the consciousness of people with dark hearts—the autocrats, the totalitarians, the nihilists—it could destroy us.

When addressing these security threats, there are a couple of things that we have to understand. Unavoidably and sadly, we find ourselves in an international prisoner’s dilemma. The UK has to maintain investment in areas such as cybersecurity and R&D towards our own sovereign quantum computing capacity. I also think that autonomous machines are probably as inevitable as the next pandemic, and our preparedness for them has to be as urgent, and at the same scale, as our current deployment on pandemic management.

We have averted nuclear war for the last 60 years through proper statecraft, political leadership and defence cogency. That context is important here, is it not? Whatever our current disagreements with countries such as China, Russia or Iran, humanity’s interests align when faced with the consequences of uncontrolled AI.

When it comes to economic threats to this country, the situation is much less bleak. The Government’s approach to economic growth in the creative industries could be to divide generative and assistive AI when it comes to regulation. Much has been said in recent months on the impact of AI on the music industry, for example, an industry in which I declare an interest as the chair of UK Music. It is important to acknowledge that music has used AI for many years now as an assistive tool. For example, Sir Paul McCartney has recently announced a new Beatles song using AI-based tech to clean up old recordings of John Lennon’s voice. The Apple corporation used AI to create the “Get Back” film on the Beatles, which allowed Sir Paul McCartney to sing a duet with a virtual John Lennon at the Glastonbury festival last year.

The crux of the issue for commerce is consent. With the Beatles examples, permission would have had to be granted by John Lennon’s estate. Consent is the crucial theme that we need to enshrine in AI regulation to protect human creativity. So, in concluding, I ask the Minister to confirm that, at the very least, the Government will rule out any new exceptions on copyright for text and data-mining purposes. Can he do so in this debate?

17:23
Lord St John of Bletso (CB)

My Lords, I join in thanking my noble friend Lord Ravensdale for introducing this topical and very important debate. Some six years ago, I was fortunate to be a member of the Lords Select Committee on Artificial Intelligence, together with the right reverend Prelate the Bishop of Oxford and the noble Lord, Lord Holmes of Richmond, ably chaired by the noble Lord, Lord Clement-Jones. We did a deep dive into the benefits of AI, as well as the risks. One of our conclusions was the necessity for joined-up thinking when it comes to regulation.

There is no denying that AI is the most powerful technology of our times, but many are getting alarmed at the speed of its adoption. It took Facebook four and a half years to reach 100 million users and Google two and a half years; it took ChatGPT just two months.

I particularly welcome its potential for advancing personalised healthcare as well as education. It will also accelerate the deployment of transformational climate solutions and, no doubt, in the bigger picture of the economy, it will drive a rapid surge in productivity. However, that poses the question of which jobs will be augmented by AI. My simple answer is that we have to focus a lot more on upskilling in all SMEs to take account of what AI will offer in the future. It is generally accepted that the long-term impact of AI on employment in the UK will be broadly neutral, but the impact of generative AI on productivity could add trillions of dollars in value to the global economy.

I listened yesterday to the podcast “The AI Dilemma”, in which leading global AI experts argue about the potential risks. What I found alarming was that 50% of leading AI researchers believe that there is a 10% or greater chance that humans could go extinct as a result of our inability to control AI. Personally, I do not share that alarmist view. I do not believe that there is an existential threat while the focus remains on narrow AI. General AI, on the other hand, remains a theoretical concept and has not yet been achieved.

On the question of regulation, there have been growing calls for Governments around the world to adapt quickly to the challenges that AI is already presenting and to the potential transformational changes, with their associated risk factors, still to come. There have been more and more calls for a moratorium on AI development; equally, I do not believe that that is possible. Regulators need cross-collaboration and cross-regulation to solve the biggest problems. There is also a need for more evidence gathering and case studies.

Public trust in AI is going to be a major challenge. Communication beyond policy is important for both private and public understanding. In terms of regulation, the financial services sector’s use of AI is much more regulated than any other sector’s. Just as regulators need to focus on addressing safety risks in the use of AI, so the regulators themselves need to be upskilled in the new advances in this technology.

DCMS appears to be taking a light touch in regulating AI, which I welcome. I also welcome initiatives by UKRI to fund responsible AI, and the new AI task force. However, there needs to be more focus on building ecosystems in which the different regulators can rejuvenate the AI supply chain. The future is uncertain.

The public sector, as we all know, is likely to be a lot slower to embrace the benefits of AI. By way of example, I am sure that the Department for Work and Pensions could do a lot more with all its data to formulate algorithms for greater efficiency and provide better support to the public in these difficult times. We need a co-ordinated approach across all government departments to agree an AI strategy.

In conclusion, there is no doubt that AI is here to stay and no doubt about the incredible potential it holds, but we need joined-up thinking and collaboration. It is accepted that AI will never be able to replace humans, but humans with AI could potentially replace humans who do not embrace AI.

17:30
Lord Skidelsky (CB)

My Lords, everyone taking part in this welcome debate, for which I thank the noble Lord, Lord Ravensdale, is aware of the arrival of ChatGPT. Perhaps not everyone knows that it is the latest stage in the quest, dating back to the Dartmouth conference of 1956, to build a machine capable of simulating every aspect of human intelligence. Current hype claims that ChatGPT will be able to generate content that is indistinguishable from human-created output, automatically producing language, lyrics, music, images and videos in any style possible following a simple user prompt.

University teachers are understandably alarmed. Like the noble Viscount, Lord Colville, a philosopher friend of mine also asked ChatGPT a question. He teaches philosophy at a university. The question was: is there a distinctively female style in moral philosophy? He sent its answer to his colleagues. One found it “uncannily human”. “To be sure”, she wrote,

“it is a pretty trite essay, but at least it is clear, grammatical, and addresses the question, which makes it better than many of our students’ essays”.

She gave it a 2:2. In other words, ChatGPT passes the Turing test, exhibiting intelligent behaviour that is indistinguishable from that of a human being. ChatGPT, we are told, is only a stepping stone to superintelligence.

Other people have been alarmed. On 22 March, the Future of Life Institute in Cambridge, Massachusetts, issued an open letter signed by thousands of tech leaders, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, calling for a six-month pause—failing that, a government-imposed moratorium—on developing AI systems more powerful than GPT-4. It said:

“AI systems with human-competitive intelligence can pose profound risks to society and humanity”.

The letter goes on to warn of the

“out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.

So what is to be done? The Future of Life Institute suggests several possible ways of trying to stop AI going rogue. Its proposals include mandating third-party auditing and certification, regulating access to computational power, creating capable regulatory agencies at the national level, establishing liability for harms caused by AI, increasing funding for safety research and developing standards for identifying and managing AI-generated content. All of these proposals are very sensible and very difficult.

There are two problems. The first is to identify what is good AI and what is bad AI. Numerous codes of conduct for the responsible use of AI exist, but they lack binding force. One proposal, developed by the Carnegie Council for Ethics in International Affairs, is for the United Nations to create a global AI observatory, which would monitor good and bad practice and develop a technology passport that all member states could use in devising their own regulation. However, the problem of developing a generally agreed normative framework for the development of AI would still not be solved. I do not quite buy the story of all those repentant Frankensteins.

The second problem is that no state has an incentive to halt the funding of AI developments which promise it a military advantage. There is a competitive race to develop new killer apps. We are already being told that we must get our AI into space before China; this point was raised by the noble Lord, Lord Giddens. In other words, there can be no pause for reflection if AI development is considered a military race. You do not pause in the middle of such a race; the race itself has to stop. International co-operation is the only way of preventing uncontrollable consequences, and we have very little of that at the moment. One thing I am absolutely sure about is that all religious faiths must be involved in any such global conversation.

17:34
Viscount Chandos (Lab)

My Lords, I found it hard to know where to start in trying to address the subject of this debate. Professor Geoffrey Hinton, whom other noble Lords have already invoked, has said that

“it's quite conceivable that humanity is just a passing phase in the evolution of intelligence”.

From the existential risk to humanity—not, in my view, to be lightly dismissed as scaremongering—through the danger of bad actors using AI for bad things, to the opportunity to re-energise productivity growth by responsibly harnessing generative AI, a mind-boggling range of issues is raised by today’s Motion.

The introduction from the noble Lord, Lord Ravensdale, could not have been bettered as a summary and agenda—an encouragement, perhaps, that humanity has not yet been displaced in the hierarchy of intelligence. The AI entrepreneur Jakob Uszkoreit is quoted in today’s Financial Times:

“In deep learning, nothing is ever just about the equations … it’s a giant bag of black magic tricks that only very few people have truly mastered”.

This echoes the question from the noble Lord, Lord Ravensdale. Who among us, in or out of government, really understands this?

Not only is it a black box but, from what we lay people know, it is one that is changing breathtakingly fast, as my noble friend Lady Primarolo has said. Moore’s law, which observed that the number of transistors on a semiconductor doubles every two years, is nothing compared with the pace of AI development. In the GPT family of foundation models, the number of parameters grew by some 1.3 billion between GPT-1 and GPT-2, in less than two years. The increase between GPT-2 and GPT-3, in roughly a single year, was over 100 times greater, at about 173 billion—as my noble friend Lord Watson said, it is exponential, not linear. As a simplified representation of the speed of AI’s development, this simultaneously indicates the opportunities to harness it for good on the one hand and demonstrates the formidable challenge for regulation on the other.

I am temperamentally an optimist and therefore excited by the positive contribution that advanced AI can make to our world. However, in the time available, I want to focus on two of the challenges: the macro issue of regulation and the more specific question of fake news. In that context, I highlight two of my interests declared in the register. I am a trustee of the Esmée Fairbairn Foundation, which has an endowment with investments in US and other VC funds with holdings in advanced AI companies. I am chair of the Thomson Foundation, which trains journalists, principally in low-income and/or low press freedom countries.

Regulation of advanced AI is inevitably complex. It may be right to focus in the short term on ensuring that existing regulators adequately consider the impact of AI. In the longer term, however, there has to be specific overarching regulation, as is implied by the Government’s declared aspiration for the UK to be the geographical centre of global AI regulation. Could the Minister say what the Government’s objectives are for the global AI summit being convened for later this year? What arguments have they used for the UK to be the global centre of regulation? When, six years ago, the founder of a Silicon Valley software company described GDPR to me as the de facto global standard for data protection, that reflected the sheer size of the EU market to which it applied. That is not a factor which applies now to the UK’s negotiating position.

Finally, the media, both mainstream and social, are a vital source of information for us all; the integrity of that information lies at the heart of democracy. Your Lordships may have seen the amusing faked photograph of Pope Francis in a fashionable puffer jacket. Other future faked images could be deeply dangerous to stability and security, both globally and, in particular, in low-income countries. Regulation cannot be the only answer to this; education and training are also essential. Can the Minister urge his colleagues in the FCDO to protect and increase funding for media development and journalist training in low-income countries, with a central focus on countering AI-generated fake news, from whatever source?

17:40
Lord Harries of Pentregarth (CB)

My Lords, I congratulate the Prime Minister on the initiative he has taken in the field of AI, but I have very grave concerns about the framework that has been put in place for monitoring it. At the moment, it is far too vague, and, with its stress on innovation, there is a real danger of some of the ethical concerns simply being sidelined.

One major danger of advanced AI is the way it could increase the amount of misinformation in the public realm, as the noble Viscount, Lord Chandos, emphasised. Society exists only on the assumption that most people, most of the time, are telling the truth, and the Government have the assent of the people only on the basis that what they put forward is, basically, to be trusted. In recent years, as we know, the issues of truth and trust have become critical. People have talked about a post-truth age, in which there is only your truth and my truth. We have conspiracy theorists, with false information being fed into our communication systems. I worry, as the noble Lord, Lord Anderson, did, about forms of artificial intelligence that can mimic public authorities or reputable sources of information; people of ill will could infiltrate all kinds of systems, from government departments to think tanks and university research departments. That is only one danger; there are of course many others.

The Government have stressed that they are taking a pro-innovation approach to AI and do not want to set up a new regulatory body. There is indeed a good reason for that: AI operates very differently in different fields. Obviously, its use in medical diagnosis or research, as the noble Lord, Lord Kakkar, emphasised, is very different from its use in military targeting, as the noble and gallant Lord, Lord Houghton, emphasised. However, what the Government intend to put in place at the moment is far too ill-defined and vague.

The Government have moved to dissolve the AI Council. They have said that it will be replaced by a group of expert advisers, together with the new Foundation Model Taskforce, led by the technology entrepreneur Ian Hogarth, which will, they say, spearhead the adaptation and regulation of technology in the UK. It seems to me that the first and most important function of the task force should be to monitor what is happening and then to alert the Government to any potential issues.

I am so glad that the noble Baroness, Lady Primarolo, mentioned the HFEA, of which I was once a member, because it provides an interesting and suggestive model. It too deals with far-reaching scientific advances that raise major ethical questions. To grapple with them, it has a horizon-scanning group composed of leading scientists in the field, whose job is to be aware of developments around the world, which are then reported to a committee to consider any legal and ethical implications arising from them.

In his excellent and well-informed opening speech, my noble friend Lord Ravensdale suggested that research and regulation belong together. I will nuance that slightly by suggesting that, although they of course have to be kept very closely together, they are in fact separate functions. I believe that the new AI task force must, first, have a horizon-scanning function on research and, in addition, the capacity to reflect on possible ethical implications. Although the details would then have to be put out to the relevant sectors where there are already regulatory regimes, the task force itself will need to know what is going on right across the different fields in which AI operates, and it will then need to be able to highlight ethical concerns. My concern is that the pro-innovation approach to AI might lead to the neglect of those functions. To avoid that, we need a clearly set-up central body whose focus is different from that of innovation and adaptation: it would be to monitor developments and then to raise any ethical concerns.

Such a central body would not, at this stage, need to be a regulator. However, that time might indeed come. The noble Lord, Lord Fairfax, and many leading figures in the industry feel that the time has already come for a new regulator—something perhaps along the lines of the International Atomic Energy Agency. For the moment, I hope the Government will at least give due thought to giving the task force a much clearer remit both to monitor developments across the field and to raise potential ethical concerns.

17:45
Lord Freyberg (CB)

My Lords, I too add my thanks to the noble Lord, Lord Ravensdale, for securing today’s timely debate. With rapid advancements in artificial intelligence, the possibilities seem boundless, but they also come with potentially significant risks. Like the noble Lord, Lord Kakkar, I will speak to how the opportunities and risks of the development of AI pertain to healthcare.

Machine learning, and more recently deep learning—commonly referred to as AI—have already shown remarkable potential in various fields, and both harbour opportunities to transform healthcare in ways that were previously unimaginable. AI can be used to process vast amounts of medical data, including patient records, genomic information and imaging scans, and to assist doctors in more accurate and timely diagnosis and prognosis. The early detection of diseases and personalised treatment plans can dramatically improve people’s quality of life and help save countless lives. AI can be used to analyse the genetic make-up of patients, and, in time, will better predict how individuals will respond to specific treatments, leading to more targeted and effective therapies, reducing adverse reactions and improving overall treatment success rates.

AI-assisted automation can streamline administrative tasks, freeing up healthcare professionals to focus on direct patient care. That has the potential to improve productivity dramatically in healthcare, as well as patient satisfaction, at a time when waiting lists and workforce shortages are, rightly, giving rise to concerns about their impact on our well-being and the UK economy. AI-powered algorithms can significantly accelerate, and thereby derisk, drug discovery and development, potentially leading to new breakthrough medications for diseases that have remained incurable.

While the promises of AI in healthcare are alluring, we must acknowledge its limitations and the potential risks associated with its development. The use of vast amounts of data to train AI models is bound to raise concerns about data privacy and security. Unauthorised access or data breaches could have severe consequences for public trust in new uses of this potentially game-changing technology. The models which underpin AI are only as good as the datasets they are trained on. Bias in the data underpinning AI in healthcare could lead to discriminatory decisions and exacerbate healthcare inequalities. Complex algorithms can be challenging to interpret, leading to a lack of transparency in decision-making processes. This opacity is liable to raise questions about accountability and give rise to new ethical considerations. We must ensure that we do not enter trading arrangements which might prevent us assessing the efficacy and risks of AI developed elsewhere for use in healthcare settings.

Crucially, where risks have the potential to be matters of life or death, we must resist the temptation to underresource pertinent regulators, and we should be mindful of hyperbole in our pursuit of innovation. To harness fully the potential of AI in healthcare while mitigating its risks, comprehensive and adaptive regulatory frameworks are imperative, both at national and international levels. The UK Government, in collaboration with international organisations, should commit to developing common standards and guardrails, by making the most of the global summit on AI safety that they will host in the autumn and contributing to the Hiroshima AI Process established by the G7. Any guardrails should be guided by the precautionary principle and prioritise patient safety, both now and in the future.

AI used in healthcare must undergo rigorous testing and validation to ensure its accuracy, safety and effectiveness. Independent bodies such as the MHRA can oversee this process if they are appropriately resourced, instilling confidence in both healthcare providers and patients. As the noble Lords, Lord Browne, Lord Bilimoria and Lord Holmes, and others said, the public should be involved in shaping AI regulations. While the Government’s AI task force is to be welcomed, it is imperative that civil society be engaged in the development of standards and guardrails applicable to AI in healthcare from the outset. The ongoing development of AI in healthcare harbours immense promise and potential. However, it is crucial that we approach this transformative technology with a careful understanding of its risks and a clear commitment to robust regulation and maintaining public trust. By fostering collaboration, we must usher in a new era of healthcare that is safer and more efficient and delivers improved patient outcomes for all.

17:50
The Earl of Devon (CB)

My Lords, having greedily signed up to all three debates taking place today, because all are on topics close to my heart, I will try to keep my contributions short. I probably should have asked AI to write my speeches today—it would have saved time and doubtless made them much better. However, this is all my own work, and it is not wholly in AI’s favour.

I thank the noble Lord, Lord Ravensdale, for raising this issue. The development of advanced AI raises many significant risks and considerable opportunities, as with all technological advances. The technology itself is complicated, poorly understood and thus intimidating. As the noble Lord, Lord Ravensdale, noted, much of the public and policy debate is sensationalist as a result—we fear what we do not understand.

I do not pretend to understand the intricacies of AI engineering, so I will focus my attention on the implications for intellectual property rights. This is an area I know as an IP litigator qualified in the US and the UK, and I have clients in this space. I also note my interest as a member of the IP APPG, which champions the interests of IP rights holders. The APPG engaged with urgency last year following the Government’s announcement that they planned to introduce a new exception to copyright and database rights to permit text and data mining—or TDM—for any purpose. Currently, TDM—in effect, the scraping of information from the internet—is permitted without a licence only for the purpose of academic research. The Government proposed to turbocharge the UK’s AI development by broadening this exception to allow TDM for all purposes, commercial or otherwise, irrespective of the rights and views of the authors of that material, in effect driving a coach and horses through the long-established IP rights of individual creatives in the interests of the AI machines. The dystopian implications were clear.

Thankfully, sense and the APPG prevailed. The Government withdrew that proposal and have begun to investigate alternative solutions with the industry, such as collective licensing and consent that might respect authors’ rights. Can the Minister please give the House an update on those important discussions?

Despite this, since the public launch of multiple large language model AI systems, such as OpenAI’s ChatGPT, Stability AI’s models and others, it has become readily apparent that they have been extensively trained, or “educated”, using copyrighted materials for which no consent has been given. There is a real risk that AI is, in effect, larceny—an industry built on the theft of the personal property of creative authors for commercial ends. The flood of IP infringement cases that have followed would suggest this. I note, for example, Getty Images’ recent UK copyright infringement case against Stability AI’s image generation system and, more famously perhaps, Sarah Silverman’s American case against OpenAI and Meta for theft of her written material.

Such is the brazen nature of this conduct that it seems the creators of AI models consider they have no need to license protected works in training their machines—either because they are outside the jurisdiction or because they consider that the “training” of their LLMs is educational. What steps have the Government taken to engage with them on this topic? I understand this is not the view of the Intellectual Property Office, which stated last year:

“Although factual data, trends and concepts are not protected by copyright, they are often embedded in copyright works. Data mining systems copy works to extract and analyse the data they contain. Unless permitted under licence or an exception, making such copies will constitute copyright infringement”.

Does the Minister agree with that statement and therefore that any large language model firms making copies without permission or exception will be infringing copyright?

Finally, can the Minister please update the House as to the Government’s thinking regarding AI inventions? When they reported last June, the Government had no plan to change the UK’s patent law because AI was not “yet” advanced enough to invent without human intervention. Given the rapid developments since, has the Government’s view changed on this point?

I note that the UK is co-ordinating closely with the United States on this issue. In the US in Thaler v Vidal, the Federal Circuit said that AI cannot be an inventor for US patents, but the USPTO has sought fresh evidence on the point. Similarly, the Copyright Office in the US rejected the registration of copyright in AI-generated works but has issued guidance and sought further evidence on the topic. What are the UK Government doing?

These are important issues. The sensitive balancing of the creative rights of the individual versus realising the extensive promise of machine learning is a key challenge for our times. Personally, I hope the humans will prevail.

17:55
Lord Udny-Lister (Con)

My Lords, I also thank the noble Lord, Lord Ravensdale, for enabling this important and timely debate on a subject which I believe is going to dominate our lives for many years to come and is probably the single most important issue in front of us today. In speaking today, I draw the attention of your Lordships’ House to my declared interests in the register.

From the news that we consume, to the way that we bank, and even the way in which some in this Chamber have prepared for their contributions, AI is now an unavoidable, omnipresent and unstoppable factor in our daily lives. The task for legislators—not just here but across the globe—is to strike the right balance between regulating to protect individuals and businesses and ensuring that regulation passed by lay legislators does not suppress the incredible innovation enabled by experts and free-market demands.

It is universally understood that AI will eventually have a greater economic impact than the industrial revolution once did. Indeed, AI presents itself as the greatest economic opportunity in a generation and will be worth hundreds of billions to UK GDP by 2030. We would therefore be foolish to seek any regulation which scuppered the competitive edge that London and the wider United Kingdom have already established when it comes to the safe, ethical, and innovative application of artificial intelligence.

I therefore put it to your Lordships’ House that the Government have got the balance right in adopting a “pro-innovation” approach to AI. Our renowned universities and research institutions in their active collaboration with industry have driven AI development and continue to enhance the UK’s competitiveness on the global stage. I would be interested to know how the Government will facilitate and support more such collaboration in the near future.

Similarly, our financial institutions are at the forefront of utilising the power of AI for growth, and the UK is harnessing the power of AI to develop more efficient public services and enable lifesaving advancements in healthcare and medicine, as we have heard this afternoon. Does my noble friend the Minister agree that in order to remain a global leader in AI we must never adopt the prescriptive and restrictive approach that was taken by the EU in the passage of the EU AI Act? What analysis have the Government undertaken on the impact this EU legislation will have on the many British companies trading in the EU?

The Government should be highly commended for asserting British prowess and leadership in AI. The recent contributions made by the Foreign Secretary at the UN Security Council last week and the Government’s announcement that the UK will host the first global summit on artificial intelligence show not only that they are taking AI seriously but that they are paving the way for the UK to be a globally significant force for good when it comes to realising the opportunities and confronting the challenges of AI.

As a member of the International Agreements Committee, I turn now to the impact of artificial intelligence on international trade. I welcome the UK’s accession to the CPTPP and the other recently secured trade deals, but I query whether, when it comes to our trading negotiations, we are currently giving enough attention to the implications that AI will have on areas such as regulatory co-operation, digital trade and wider workforce issues. In his summing up today, could my noble friend the Minister reassure the House that, beyond creating and promoting an economy that supports AI development, our negotiation teams are poised to not only embrace the opportunities of AI but actively identify the future threats and challenges that it could pose to the security of our trading relations?

I am conscious of time. I ask the Government to adopt proportionality as the golden thread of future AI regulation, and to be ever mindful of the burden that regulation places on industry and on the businesses that underpin our economy.

I am confident that if the Government continue along the same path, acting as an enabler, an alliance builder and an industry-informed safeguarder, AI will be an unparalleled opportunity for growth and the making of a modern Britain.

18:01
Lord Londesborough (CB)

My Lords, first, I salute my noble friend Lord Ravensdale for securing this much-overdue debate. AI is a huge and challenging subject, so my focus today will be limited to its potential economic impact. I refer to my interests as set out in the register as an adviser and investor in start-ups and early-stage ventures.

I must confess that, like the noble Viscount, Lord Colville, I was briefly tempted to outsource my AI speech to a chatbot to see if anybody noticed. I tested two large language models; within seconds, both delivered 500-word speeches which were credible, if somewhat generic. AGI—artificial general intelligence—will soon be able to write my speeches in my personal style, having scraped Hansard, and deliver them in my voice through natural language understanding, having analysed and processed my speeches on parliamentlive.tv—and with no hesitation, repetition or deviation.

Is it an exciting or alarming prospect that your Lordships might one day be replaced by “Peerbots” with deeper knowledge, higher productivity and lower running costs? This is the prospect facing perhaps as many as 5 million workers in the UK over the next 10 years. That said, the UK economy is in dire need of AI to address low productivity and growth, and critical capacity constraints, most notably in our workforce. The economic model of adding millions of low-skilled jobs, or making people work longer hours, is not sustainable. We have an ageing population, a shrinking workforce, record numbers of long-term sick and a health sector in perpetual crisis with unprecedented waiting lists. We need a qualitative, not quantitative, approach to economic growth, and AI could play a critical role.

The UK’s productivity has been in the doldrums for almost 20 years, with output per hour well below the levels of Germany, France, the US and many other countries. Forecasts of the economic impact of AI vary wildly, with some predicting 20% to 30% rises in productivity set against the disappearance of up to 30% of jobs. It is educated guesswork at this stage. Some predict that AI will lift GDP growth by an additional, but hugely significant, 2% per annum. A word of warning: we had similar expectations of the digital revolution. If you look back over the last 25 years, we have indeed witnessed extraordinary changes both as workers and as consumers: the smartphone, e-commerce, automation, video communications, contactless payments and working from home. However, in the decade leading up to the pandemic, when GDP growth averaged a modest 1.8% per annum, 1.2 percentage points of that growth came from working longer hours, 0.5 came from capital investment and just 0.1 came from innovation and better working practices.

While AI looks set to have a transformative impact on our working practices, as the digital world has done, the big question remains over the net impact on economic growth. As with the digital economy, the risk is that AI may ultimately lead to a few dominant tech giants with huge market share, and further skew the distribution of wealth.

I appreciate that this will be a global dynamic largely beyond the control of our Government, but I conclude by asking the Minister two questions. First, how will the Government nurture a multiplicity of AI players in the UK rather than a dominant few? Secondly, mindful of the recent cuts in R&D tax credits in this year’s Budget, how will SMEs be incentivised to adopt and invest in AI technology to boost their productivity and competitive edge?

18:06
Viscount Waverley (CB)

My Lords, the takeaway for me from this informative debate is to ask: where do we go from here? Artificial intelligence is vital to our national interest, to regional prosperity and to shared global challenges. It should be seen as an instrument that shows us the bigger picture in a vast, complex and detailed chain over which no single country, or corporate, should ever have overall control. Care must be taken that every data point, statistical analysis and prediction model is spot on. There may be dire consequences if we ever rely on data that is unverified, inaccurate or misleading owing to misunderstandings of context or nuance, resulting in what the jargon calls technological hallucinations. Failing to get this right could compromise the whole AI experience.

Got right, however, AI policy and regulation will play a vital role in monitoring compliance, analysing trends and assessing the impact of policies, providing transparency, trust and accountability that will ensure that AI-driven decisions and recommendations produce credible, far-reaching results. It can tell us, for example, where to seek proof of reliability, raise red flags and shed light on the previously invisible interconnection of the global economy by assisting us in understanding the complexities of trade dynamics.

Artificial intelligence is a comparatively unexplored component that can impact international trade. When it is used to help grow economies in a fast, efficient and fair manner, by pointing to the exact location of risks in a long-chain transaction or a complex supply chain, this is all to the good. There is currently no system, however, that monitors and identifies suspicious global trade patterns, no mapping of complex international trade flows and no overall analysis of international trading patterns across multiple countries. AI can change all that. Nationally built systems sitting in technological and political silos must be avoided if we are to combat these challenges; rather, collaborative efforts between nation states will enable a comprehensive understanding of patterns and the devising of targeted strategies. Collaborative efforts such as Project Perseus bring together technology, finance and policy to unlock sustainable finance for SMEs through the sharing of accurate data from the real economy related to energy usage and resilience. This is critical for UK stakeholders in the business and banking world.

Some 200 million bills of lading, the documents at the heart of international trade, were recently examined by the International Centre for Trade Transparency and Monitoring, to which I am a party, revealing that 13.6% of these documents included at least one substantive inaccuracy. Such mistakes can quickly spread through a supply chain, posing risks that can have far-reaching effects on the economy, covering up inappropriate trade practices such as dumping, counterfeit or sanctions avoidance. Results that are 1 degree out skew detailed analysis.

ESG reporting is also becoming the new norm for companies to communicate their environmental, social and governance credentials to the markets. Interoperability affords legal protection and a process that safeguards SMEs and banks alike. AI also has great value in the prevention and detection of crime, especially fraud: when applied, it can save many staff hours and drive investigators to the heart of the crime. What took a human researcher two days now takes an hour.

Before we launch into creating further uses of data, it is essential to ensure that government and industry have governance right. To underline all this, we need look no further than a notice distributed to all Members by the parliamentary digital department, informing us that

“generative AI tools are susceptible to bias and misinformation”.

We have the frameworks and processes in place to deliver success, but the time for theory is over.

18:10
Lord Clement-Jones (LD)

My Lords, I remind the House of my relevant interests in the register. We are all indebted to the noble Lord, Lord Ravensdale, for initiating this very timely debate and for inspiring such a thought-provoking and informed set of speeches today. The narrative around AI swirls back and forth in this age of generative AI, to an even greater degree than when our AI Select Committee conducted its inquiry in 2017-18; it is very good to see a number of members of that committee here today. For instance, in March more than 1,000 technologists called for a moratorium on AI development. This month, another 1,000 technologists said that AI is a force for good. As the noble Lord, Lord Giddens, said, we need to separate the hype from the reality to an even greater extent.

Our Prime Minister seems to oscillate between various narratives. One month we have an AI governance White Paper suggesting an almost entirely voluntary approach to regulation, and then shortly thereafter he talks about AI as an existential risk. He wants the UK to be a global hub for AI and a world leader in AI safety, with a summit later this year, which a number of noble Lords discussed.

I will not dwell too much on the definition of AI. The fact is that the EU and OECD definitions are now widely accepted, as is the latter’s classification framework, but I very much liked what the noble and right reverend Lord, Lord Chartres, said about our need to decide whether it is tool, partner or competitor. We heard today of the many opportunities AI presents to transform many aspects of people’s lives for the better, from healthcare—mentioned by the noble Lords, Lord Kakkar and Lord Freyberg, in particular—to scientific research, education, trade, agriculture and meeting many of the sustainable development goals. There may be gains in productivity, as the noble Lord, Lord Londesborough, postulated, or in the detection of crime, as the noble Viscount, Lord Waverley, said.

However, AI also clearly presents major risks, especially reflecting and exacerbating social prejudices and bias, the misuse of personal data and undermining the right to privacy, such as in the use of live facial recognition technology. We have the spreading of misinformation, the so-called hallucinations of large language models and the creation of deepfakes and hyper-realistic sexual abuse imagery, as the NSPCC has highlighted, all potentially exacerbated by new open-source large language models that are coming. We have a Select Committee, as we heard today from the noble Lord, Lord Browne, and the noble and gallant Lord, Lord Houghton, looking at the dilemmas posed by lethal autonomous weapons. As the noble Lord, Lord Anderson, said, we have major threats to national security. The noble Lord, Lord Rees, interestingly mentioned the question of overdependence on artificial intelligence—a rather new but very clearly present risk for the future.

We heard from the noble Baroness, Lady Primarolo, that we must have an approach to AI that augments jobs as far as possible and equips people with the skills they need, whether to use new technology or to create it. We should go further, with a massive skills and upskilling agenda and much greater diversity and inclusion in the AI workforce. We must enable innovators and entrepreneurs to experiment, while taking on concentrations of power, as the noble Baroness, Lady Stowell, and the noble Lords, Lord Rees and Lord Londesborough, emphasised; we must make sure that those concentrations do not stifle innovation, limit choice for consumers or hamper progress. We need to tackle the issues of access to semiconductors, computing power and the datasets necessary to develop large language generative AI models, as the noble Lords, Lord Ravensdale, Lord Bilimoria and Lord Watson, mentioned.

However, the key and most pressing challenge is to build public trust, as we heard from so many noble Lords, and ensure that new technology is developed and deployed ethically, so that it respects people’s fundamental rights, including the rights to privacy and non-discrimination, and so that it enhances rather than substitutes for human creativity and endeavour. Explainability is key, as the noble Lord, Lord Holmes, said. I entirely agree with the right reverend Prelate that we need to make sure that we adopt these high-level ethical principles, but I do not believe that is enough. A long gestation period of national AI policy-making has ended up producing a minimal proposal for:

“A pro-innovation approach to AI regulation”,

which, in substance, will amount to toothless exhortation by sectoral regulators to follow ethical principles and a complete failure to regulate AI development where there is no regulator.

Much of the White Paper’s diagnosis of the risks and opportunities of AI is correct. It emphasises the need for public trust and sets out the attendant risks, but the actual governance prescription falls far short and does nothing to ensure how the benefits of AI will be distributed. There is no recognition that different forms of AI are technologies that need a comprehensive cross-sectoral approach to ensure that they are transparent, explainable, accurate and free of bias, whether they are deployed in a regulated or an unregulated sector. Business needs clear central co-ordination and oversight, not a patchwork of regulation. Existing coverage by legal duties is very patchy: bias may be covered by the Equality Act and data issues by our data protection laws but, for example, there is no existing obligation for ethics by design covering transparency, explainability and accountability, and liability for the performance of AI systems is very unclear.

We need to be clear, above all, as organisations such as techUK are, that regulation is not necessarily the enemy of innovation. In fact, it can be the stimulus and the key to gaining and retaining public trust around AI and its adoption, so that we can realise the benefits and minimise the risks. What I believe is needed is risk-based, cross-sectoral regulation, combined with specific regulation in sectors such as financial services, underpinned by common, trustworthy standards of testing, risk and impact assessment, audit and monitoring. We need, as far as possible, to ensure international convergence, as we heard from the noble Lord, Lord Rees, and interoperability of these standards for AI systems, and to move towards common IP treatment of AI products.

We have world-beating AI researchers and developers. We need to support their international contribution, not fool them into believing that they can operate in isolation. If they have any international ambitions, they will have to decide to conform to EU requirements under the forthcoming AI legislation and ensure that they avoid liability in the US by adopting the AI risk management standards being set by the National Institute of Standards and Technology. Can the Minister tell us what the next steps will be, following the White Paper? When will the global summit be held? What is the AI task force designed to do, and how? Does he agree that international convergence on standards is necessary and achievable? Does he agree that we need to regulate before the advent of artificial general intelligence, as a number of noble Lords, such as the noble Lords, Lord Fairfax and Lord Watson, and the noble Viscount, Lord Colville, suggested?

As for the creative industries, there are clearly great opportunities in relation to the use of AI. Many sectors already use the technology in a variety of ways to enhance their creativity and make it easier for the public to discover new content, in the manner described by the noble Lord, Lord Watson.

But there are also big questions over authorship and intellectual property, and many artists feel threatened. Responsible AI developers seek to license content which will bring in valuable income. However, as the noble Earl, Lord Devon, said, many of the large language model developers seem to believe that they do not need to seek permission to ingest content. What discussion has the Minister, or other Ministers, had with these large language model firms in relation to their responsibilities for copyright law? Can he also make a clear statement that the UK Government believe that the ingestion of content requires permission from rights holders, and that, should permission be granted, licences should be sought and paid for? Will he also be able to update us on the code of practice process in relation to text and data-mining licensing, following the Government’s decision to shelve changes to the exemption and the consultation that the Intellectual Property Office has been undertaking?

There are many other issues relating to performing rights and the copying of actors’, musicians’, artists’ and other creators’ images, voices, likenesses, styles and attributes. These are at the root of the Hollywood actors’ and screenwriters’ strike, as well as of campaigns here from the Writers’ Guild of Great Britain and from Equity. We need to ensure that creators and artists derive the full benefit of technology such as AI-made performance synthesisation and streaming. I very much hope that the Minister can comment on that as well.

We have only scratched the surface in tackling the AI governance issues in this excellent debate, but I hope that the Minister’s reply can assure us that the Government are moving forward at pace on this and will ensure that a debate of the kind that the noble Lord, Lord Giddens, has called for goes forward.

18:21
Lord Bassam of Brighton (Lab)

My Lords, the noble Lord, Lord Ravensdale, is of course to be congratulated —we have all managed to do so—on stimulating such a thought-provoking and thoughtful debate. I observe that this is one of the occasions on which Cross-Bench contributions far outnumber contributions from those of us who have a political label attached. There is a good reason for that: not only do our Cross-Bench colleagues bring independence and expertise, but they also bring knowledge and insight, and we should be very grateful for that, to echo the noble Lord, Lord Clement-Jones.

This debate is indeed timely. Artificial intelligence has gone from theory to reality in a short time, and from marginal to mainstream. Innovation is of course in part being driven by our UK tech start-up and scale-up firms, which my party believes is essential if we are to secure the strongest sustained economic growth in the G7, the growth we need to meet our nation’s challenges and move away from our current flatlining economy.

A decade ago, most people’s experience of AI was limited to things like the tagging of photographs on social media, whereas now we can unlock our mobile phone with face ID and use AI to generate many forms of content—images, videos, essays, speeches and even data analysis—in a matter of seconds. As the noble Lord, Lord Watson, reminded us, AI can now be used to simulate Beatles hits from the past and create modern duets. The creative sector will benefit greatly from AI.

Impressive as these developments are, they can be considered fairly basic compared with many new forms of advanced AI, the focus of today’s debate. The Lords Library briefing notes the use of AI in the financial services sector in relation to identifying suspicious transactions, but its application in that sector is even more widespread. Many banks’ mobile apps can categorise purchases to help people track how much they are spending on food or fashion, and push personalised offers for supplementary products such as credit cards or insurance. If you need help, you will likely encounter an AI assistant. I can even use an AI assistant to help me choose where best to sit in my local football club’s stadium.

There is no doubt that these tools are having a positive impact in many realms, making previously complex processes more straightforward. Firms and public services are benefiting, with AI technologies improving the identification of certain types of cancer, the safety of transport networks, the handling of spikes in demand for energy and so on.

Labour believes that AI can be used for even greater good. The noble Lord, Lord Kakkar, made a compelling argument regarding the NHS and the healthcare sector. We would wish to see new technologies cut waiting lists, better identify a broader range of illnesses and improve diagnostics. In the welfare system, it could be used to personalise jobseekers’ return-to-work plans and spot the fraud and loss that cost the Exchequer billions each year.

However, as noble Lords have observed, with opportunity comes risk. While AI can be put to positive use, we are seeing more and more examples of new technologies having unintended consequences or being deliberately deployed by the dark side in undesirable ways. Many noble Lords have spoken about the risks of AI today, whether those concerns are around bias and discrimination, privacy, security or AI’s impact on jobs and wealth distribution.

We do not need to go far to find evidence of AI being misused, and there are particularly worrying trends in the security realm. AI tools are being used to generate convincing text messages. The noble Lord, Lord Anderson, made a very good argument covering that point. AI can also simulate voice clips which purport to come from loved ones, friends, neighbours or family, leaving people increasingly vulnerable to scams. There is evidence that AI chatbots are being programmed to radicalise young people, which is why the shadow Home Secretary has announced that Labour will criminalise the deliberate training of chatbots to promote terrorism and violence.

During the passage of the Online Safety Bill, the House debated the rollout of AI-generated content and the growth of the metaverse. Police forces are said to have voiced concern about the scale of misogyny and racism in the latter, and about the potential for child abuse. As noted in the Library briefing and by noble Lords today, the risks of AI extend further still. The rollout of ChatGPT, Bard and other large language models has sparked concerns about the integrity of the education system.

On employment, while some see AI as a means of boosting productivity, others, including the TUC, are understandably nervous about the impact on jobs. Octopus Energy claims that its AI-based customer service system does the work of around 250 people, so the threats are apparent.

These debates have become more pressing in recent months, with warnings from some of the field’s greatest minds that AI is developing at such a pace that its downsides may become unmanageable. The Government have been slow to wake up to the dangers of AI; their strategy did not, in our view, sufficiently address the risks that new technologies bring, and the Prime Minister has had to change tack in recent months. Which way will he go? Where will he end up?

As noble Lords have said, it is a question of risk versus innovation, and we need to ensure that we have quality regulation. It is useful that we have health and safety processes, but they alone will not provide the breadth of protection that consumers and workers alike require. We need to adopt the precautionary principle.

What sort of regulatory framework does the Minister see for the future? The Blair/Hague report set out the parameters of the debate, but where will the Government settle? That has been one of the key questions focused on in this afternoon’s debate.

While we welcome the Prime Minister’s desire to discuss these matters on the international stage, we know that he is not so keen to lead on issues such as climate change, and the UK seems to have taken a step back on that and other issues. It is clear that very serious conversations are needed in the weeks, months and years ahead: discussions with the tech sector, the police, security forces and our key international partners.

AI will be key to solving some of the most pressing challenges faced by society, but we must ensure that there are appropriate guardrails in place to stop it being exploited in ways that will cause more harm than good. I know the Minister is an AI enthusiast and we do not wish to change that, but we hope he can demonstrate in his response that the Government are fully informed and ready to act.

The benefits of AI are undeniable, but so are the risks. As Members of your Lordships’ House, it is incumbent on us to gain a thorough understanding of AI technology and all its implications. By doing so, we can effectively address the challenges it presents and leverage its potential to the fullest for the betterment of our society. We need to collaborate, engage in research and encourage dialogue with experts, academics and industry leaders. By harnessing knowledge and wisdom, we can navigate the complex landscape of AI development and regulation and, I hope, ensure a future where AI serves as a powerful force for good while safeguarding the interests of all citizens.

18:30
The Parliamentary Under-Secretary of State, Department for Science, Innovation and Technology (Viscount Camrose) (Con)

I join all noble Lords in thanking the noble Lord, Lord Ravensdale, for tabling such an important debate on how we establish the right guardrails for AI—surely, as many noble Lords have said, one of the most pressing issues of our time. I thank all noble Lords who have spoken on this so persuasively and impactfully; it has been a debate of the highest quality and I am delighted to take part in it. A great many important points have been raised. I will do my best to address them all and to associate noble Lords with the points they have raised. If I inadvertently neglect to do either, that is by no means on purpose and I hope noble Lords will write to me. I am very keen to continue the conversation wherever possible.

As many have observed, AI is here right now, underpinning ever more digital services, such as public transport apps, and innovations on the horizon, from medical breakthroughs to driverless cars. There are huge opportunities to drive productivity and prosperity across the economy, with some analysts predicting a tripling of growth for economies that make the most of this transformational technology.

However, with rapid advances in AI technologies come accelerated and altogether new risks, from the amplification of unfair biases in data to new risks that may emerge in the made-up answers that AI chatbots produce when there are gaps in training data. Potential risks may be difficult to quantify and measure, as mentioned by many today, such as the possibility of super-intelligence putting humanity in danger. Clearly, these risks need to be carefully considered and, where appropriate, addressed by the Government.

As stated in the AI regulation White Paper, unless our regulatory approach addresses the significant risks caused or amplified by AI, the public will not trust the technology and we will fail to maximise the opportunities it presents. To drive trust in AI, it is critical to establish the right guardrails. The principles at the heart of our regulatory framework articulate what responsible, safe and reliable AI innovation should look like.

This work is supported by the Government’s commitment to tools for trustworthy AI, including technical standards and assurance techniques. These important tools will ensure that safety, trust and security are at the heart of AI products and services, while boosting international interoperability on AI governance, as referenced by the noble Lord, Lord Clement-Jones. Initiatives such as the AI Standards Hub and a portfolio of AI assurance techniques support this work.

The principles will be complemented by our work to strengthen the evidence base on trustworthy AI, as noted by the noble Lords, Lord Ravensdale and Lord St John. Building safe and reliable AI systems is a difficult technical challenge, on which much excellent research is being conducted. To answer the point on compute from the noble Lords, Lord Ravensdale and Lord Watson, the Government have earmarked £900 million for exascale compute and AI research resource as of March this year. I agree with the noble Lord, Lord Kakkar, that data access is critical for the UK’s scientific leadership and competitiveness. The National Data Strategy set out a pro-growth approach with data availability and confidence in responsible use at its heart.

I say to my noble friend Lord Holmes that, while synthetic data can be a tool to address some issues of bias, there are additional dangers in training models on data that is computer-generated rather than drawn from the real world. To reassure the noble Lord, Lord Bilimoria, this Government have invested significant amounts in AI since 2014, for the better part of 10 years: some £2.5 billion, in addition to the £900 million earmarked for exascale compute that I have mentioned, in specific R&D projects on trustworthy AI, including a further £31 million UKRI research grant into responsible and trustworthy AI this year.

The Foundation Model Taskforce will provide further critical insights into this question. We have announced £100 million of initial funding for the Foundation Model Taskforce and Ian Hogarth, as its chair—to address the concerns of the noble Lord, Lord Browne—will report directly to the Prime Minister and the Technology Secretary. Linked to this, I thank my noble friend Lady Stowell of Beeston for raising the Communications and Digital Committee’s inquiry into large language models, which she will chair. This is an important issue, and my department will respond to the inquiry.

As the Prime Minister has made clear, we are taking action to establish the right guardrails for AI. The AI regulation White Paper, published this March, set out our proportionate, outcomes-focused and adaptable regulatory framework—important characteristics noted by many noble Lords. As the noble Lord, Lord Clement-Jones, noted, our approach is designed to adapt as this fast-moving technology develops and respond quickly as risks emerge or escalate. We will ensure that there are protections for the public without holding businesses back from using AI technology to deliver stronger economic growth, better jobs and bold new discoveries that radically improve people’s lives.

The right reverend Prelate the Bishop of Oxford and my noble friend Lord Holmes raised points about ethics and accountability. Our approach is underpinned by a set of values-based principles, aligned with the OECD and reflecting the ethical use of AI through concepts such as fairness, transparency and accountability. To reassure the noble and right reverend Lord, Lord Harries, we are accelerating our work to establish the central functions proposed in the White Paper, including horizon-scanning and risk assessment. These central functions will allow the Government to identify, monitor and respond to AI risks in a rigorous way—including existential risks, as raised by my noble friend Lord Fairfax, and biosecurity risks, referred to by the noble Lord, Lord Anderson.

We recognise the importance of regulator upskilling and co-ordination, as noted by several noble Lords. Our central functions will support existing regulators to apply the principles, using their sectoral expertise, and our regulatory sandbox will help build best practice.

I thank noble Lords for their emphasis on stakeholder inclusion. We made it clear in the White Paper that we are taking a collaborative approach and are already putting this into practice. For example, to answer the inquiry of the noble Lord, Lord Bilimoria, I am pleased that the White Paper sets out plans to create an education and awareness function to make sure that a wide range of groups are empowered and encouraged to engage with the regulatory framework.

In addition to meetings with the major AI developers—the multinational conglomerates noted by the noble Lord, Lord Rees—the Prime Minister, the Technology Secretary and I have met British-based AI start-ups and scale-ups. We heard from more than 300 people at round tables and workshops organised as part of our recent consultation on the White Paper, including civil society organisations and trade unions that I was fortunate enough to speak with personally. More than 400 stakeholders have sent us written evidence. To reassure the right reverend Prelate the Bishop of Oxford, we also continue to collaborate with our colleagues across government, including the Centre for Data Ethics and Innovation, which leads the Government’s work to enable trustworthy innovation, using data and AI to earn public trust.

It is important to note that the proposals put forward in the White Paper work in tandem with legislation currently going through Parliament, such as the Online Safety Bill and the Data Protection and Digital Information Bill. We were clear that the AI regulation White Paper is a first step in addressing the risks and opportunities presented by AI. We will review and adapt our approach in response to the fast pace of this technology. We are unafraid to take further steps if needed to ensure safe and responsible AI innovation.

As we have heard in this debate, the issue of copyright protection and how it applies to training materials and outputs from generative AI is an important issue to get right. To the several noble Lords who asked for the Government’s view on copying works in order to extract data in relation to copyright law, I can confirm this Government’s position that, under existing law, copying works in order to extract data from them will infringe copyright, unless copying is permitted under a licence or exception. The legal question of exactly what is permitted under existing copyright exceptions is the subject of ongoing litigation on whose details I will not comment.

To respond to my noble friend Lady Stowell and the noble Lords, Lord Watson and Lord Clement-Jones, we believe that the involvement of both the AI and creative sectors in the discussions the IPO is currently facilitating will help with the creation of a balanced and pragmatic code of practice that will enable both sectors to grow in partnership.

The noble Earl, Lord Devon, raised the question of AI inventions. The Government have committed to keep this area under review. As noble Lords may be aware, this issue is currently being considered in the DABUS case, and we are closely monitoring that litigation.

The noble Lord, Lord Rees, and the noble Baroness, Lady Primarolo, raised the important issue of the impact of AI on the labour market. I note that AI has the potential to be a net creator of jobs. The World Economic Forum’s Future of Jobs Report 2023 found that, while 25% of organisations expect AI to lead to job losses, 50% expect it to create job growth. However, even with such job growth, we can anticipate disruption—a point raised by the noble Lord, Lord Bassam, and others.

Many noble Lords asked whether our postgraduate AI conversion courses and scholarships will expand the AI workforce. We are working with partners to develop research to help employees understand what skills they need to use AI effectively. The Department for Work and Pensions’ job-matching pilot is assessing how new technologies like AI might support jobseekers.

To address the point made by the noble Baroness, Lady Primarolo, on workplace surveillance, the Government recognise that the deployment of technologies in a workplace context involves consideration of a wide range of regulatory frameworks—not just data protection law but also human rights law, legal frameworks relating to health and safety and, most importantly, employment law. We outline a commitment to contestability and redress in our White Paper principles: where AI might challenge someone’s rights in the workplace, the UK has a strong system of legislation and enforcement of these protections.

In response to the concerns about AI’s impact on journalism, raised by the noble Viscount, Lord Colville, I met with the Secretary of State for Culture, Media and Sport last week. She has held a number of meetings with the sector and plans to convene round tables with media stakeholders on this issue.

To address the point made by the noble Viscount, Lord Chandos, companies subject to the Online Safety Bill’s safety duties must take action against illegal content online, including illegal misinformation and disinformation produced by AI. I also note that the Online Safety Bill will regulate generative AI content on services that allow user interaction, including using AI chatbots to radicalise others, especially young people. The strongest protections in the Bill are for children: platforms will have to take comprehensive measures to protect them from harm.

The Government are, of course, very aware of concerns around the adoption of AI in the military, as raised by the noble Lord, Lord Browne, and the noble and gallant Lord, Lord Houghton. We are determined to adopt AI safely and responsibly, because no other approach would be in line with the values of the British public. It is clear that the UK and our allies must adopt AI with pace and purpose to maintain the UK’s competitive advantage, as set out in the Ministry of Defence’s Defence Artificial Intelligence Strategy.

On the work the UK is doing with our international partners, the UK’s global leadership on AI has a long precedent. To reassure the noble Viscount, Lord Chandos, the UK is already consistently ranked in the top three countries for AI across a number of metrics. I also reassure the noble Lord, Lord Freyberg, that the UK already plays an important role in international fora, including the G7’s Hiroshima AI Process, the Council of Europe, the OECD, UNESCO and the G20, as well as through being a founding member of the Global Partnership on AI. Our leadership is recognised internationally, with President Biden commending the Prime Minister’s ambition to make the UK the home of AI safety. Demonstrating our leadership, on 18 July the Foreign Secretary chaired the first ever UN Security Council briefing on AI, calling on the world to come together to address the global opportunities and challenges of AI, particularly in relation to peace and security.

To reassure my noble friend Lord Udny-Lister and the noble Lord, Lord Giddens, on the importance of assessing AI opportunities and risks with our international partners, it is clear to this Government that the governance of AI is a subject of global importance. As such, it requires global co-operation. That is why the UK will host the first major global summit on AI safety this year. Some noble Lords have questioned the UK’s convening power internationally. The summit will bring together key countries, as well as leading tech companies and researchers, to drive targeted, rapid international action to guarantee safety and security at the frontier of this technology.

To bring this to a close, AI has rapidly advanced in under a decade and we anticipate further rapid leaps. These advances bring great opportunities, from improving diagnostics in healthcare to tackling climate change. However, they also bring serious challenges, such as the threat of fraud and disinformation created by deepfakes. We note the stark warnings from AI pioneers, however uncertain they may be, about artificial general intelligence and AI biosecurity risks.

The UK already has a reputation as a global leader on AI as a result of our thriving AI ecosystem, our world-class institutions and AI research base and our respected rule of law. Through our work to establish the rules that govern AI and create the mechanisms that enable the adaptation of those rules, the Foundation Model Taskforce and the forthcoming summit on AI safety, we will lead the debate on safe and responsible AI innovation. We will unlock the extraordinary benefits of this landmark technology while protecting our society and keeping the public safe. I thank all noble Lords for today’s really important debate—the insights will help guide our next steps on this critical agenda.

18:47
Lord Ravensdale (CB)

My Lords, I thank the Minister for his reply, and thank all noble Lords who have taken part in what has been a most illuminating debate. The debate has achieved exactly what I hoped it would: in no other organisation would we get such a diverse range of knowledge and expertise applied to this question, as the noble Baroness, Lady Primarolo, said. We have touched on ethics; the effects on society across a whole range of areas, including business, healthcare and defence; risks; and regulation, where the noble Lord, Lord Clement-Jones, reminded us that regulation can be key to stimulating innovation, not just a barrier to it.

As we went through the debate and the subject of risks came up, particularly in the point made by the noble Lord, Lord Fairfax, I was reminded of something my eight-year-old child said to me on the subject of AI. After a short discussion on the current state of play, the positives and the negatives, he said, “We should stop inventing it, Daddy. I think we would be all right”. I think that, sometimes, we should listen to the wisdom of our children and reflect upon it.

Another key aspect was brought up by the noble Lord, Lord Rees, and others: international collaboration and the need for an enforceable regulatory system with global range. As the noble Lord, Lord Watson, noted, we are presently stuck in something of a prisoner’s dilemma between nations. How do we break that down and find common interests between nations to resolve it? I would go back to the 1955 Russell-Einstein Manifesto. In the early days of the nuclear age, when we were thinking about the existential risks of nuclear weapons, that manifesto brought scientists and policymakers together to address the issue; a key quote from it was:

“Remember your humanity, and forget the rest”.

I look forward to the autumn conference the Government have organised. I also look forward to the King’s Speech, where I hope to see an AI Bill or at least some concrete steps forward on the topics discussed today. Whatever happens, we need much greater input from Parliament on these questions going forward.

Motion agreed.