
AI Safety Institute will examine, evaluate, and test new types of artificial intelligence

Prime Minister Rishi Sunak has announced that the United Kingdom will establish the world's first AI Safety Institute to examine, evaluate, and test new types of artificial intelligence (AI). Sunak made the announcement during a speech at The Royal Society, in which he reflected on a shared global responsibility to understand and address the risks surrounding AI so that its benefits and opportunities can be realised for future generations.

Sunak delivered his speech just days before the UK hosts the Global AI Safety Summit at Bletchley Park, the historic home of Britain's Second World War codebreakers and a birthplace of modern computing. In April, he announced plans to establish the UK government's Frontier AI Taskforce to lead the safe and reliable development of frontier AI models, including generative AI large language models (LLMs) like ChatGPT and Google Bard. The taskforce was launched in June and is backed with £100 million in funding to ensure sovereign capabilities and broad adoption of safe and reliable foundation models, helping cement the UK’s position as a science and technology superpower by 2030.

Meanwhile, in August, AI was officially classed as a national security threat to the UK for the first time following the publication of the National Risk Register (NRR) 2023.

British people deserve peace of mind over advanced protections for AI

"The British people should have peace of mind that we're developing the most advanced protections for AI of any country in the world," Sunak said. "I will always be honest with you about the risks, and you can trust me to make the right long-term decisions."

The AI Safety Institute will assess and study these risks, from social harms like bias and misinformation through to the most extreme risks of all, so that the UK understands what each new AI model is capable of, Sunak added.

"Right now, we don't have a shared understanding of the risks that we face. Without that, we cannot hope to work together to address them." The UK will therefore push hard to agree to the first ever international statement about the nature of AI risks to ensure that, as they evolve, so does shared understanding about them, Sunak said.

"I will propose that we establish a truly global expert panel to publish a State of AI Science report. Of course, our efforts also depend on collaboration with the AI companies themselves. Uniquely in the world, those companies have already trusted the UK with privileged access to their models. That's why the UK is so well-placed to create the world's first Safety Institute."

UK tech tsar warns of AI cyberthreats to the NHS

Last month, the UK government's AI tsar Ian Hogarth warned that cybercriminals could use AI to attack the National Health Service (NHS). Hogarth, who is the chair of the UK’s Frontier AI Taskforce, said that AI could be weaponised to disrupt the NHS, potentially rivalling the impact of the COVID-19 pandemic or the WannaCry ransomware attack of 2017. He highlighted the risks of AI systems being used to launch cyberattacks on the health service, or even to design pathogens and toxins. Meanwhile, advances in AI technology, particularly in code writing, are lowering the barriers for cybercriminals to carry out attacks, he added.

"The government is quite rightly putting these threats to the very top of the agenda, but technology leaders need to heed the warning and get moving, to better prepare for the next inevitable attack," Hogarth told the Financial Times.

The threats posed by advancing AI technology are fundamentally global risks, Hogarth said. "The kind of risks that we are paying most attention to are augmented national security risks. A huge number of people in technology right now are trying to develop AI systems that are superhuman at writing code. That technology is getting better and better by the day."

In the same way that the UK collaborates with China on aspects of biosecurity and cybersecurity, there is real value in international collaboration around the larger-scale risks of AI, he added. "It's the sort of thing where you can't go it alone in terms of trying to contain these threats."