Re:View

The Week 6 September 2024

Joe Hill
Policy Director

In Westminster, ‘term time’ is back for a frenetic few weeks before Parliament breaks up again for party conferences. The news has understandably been dominated by the Grenfell Inquiry’s final report (seven years later…) and the shocking revelations of institutional failure.

Looking to the future, it has been a big week for AI in Britain. In an (error-strewn) press release the Ministry of Justice announced that Britain has signed the Council of Europe’s AI Convention, described as the world’s first ‘legally binding’ treaty on AI. The convention includes a set of requirements to make governments and private companies demonstrate their AI complies with the principles of human rights, equality and reliability, along with wide-ranging prescriptions to guarantee these principles will be applied in practice.

It’s hard to see how this aligns with the government’s plan to legislate only for highly targeted AI regulation in this Parliament (described as “not a Christmas tree bill” by DSIT Secretary of State Peter Kyle), given that the convention imposes such a broad set of requirements on a much longer list of public and private organisations. The AI Bill was intended to focus mainly on making the existing voluntary agreements with the ‘hyperscaler’ companies (e.g. OpenAI and Anthropic) legally binding.

Also this week, Peter Kyle spoke in Parliament about the importance of technology in improving public services, saying “Technology is much more than just another sector to support… it is the foundation for every one of our national missions”. It has distinct echoes of Harold Wilson’s “white heat of technology” vision, 61 years ago.

With impeccable timing, we published ‘Getting the machine learning’, our report on turbocharging state adoption of AI, the day after his speech! It has 19 specific recommendations for Kyle to consider when reviewing Matt Clifford’s upcoming AI Opportunities Action Plan, due later this month. My favourite of those 19 is Recommendation 16, which calls for weighing the risk of using AI to automate a process against the risk of humans continuing to perform it. Public services are littered with failures of human decision-making; we should expect AI to be better, not perfect.

Re:State aren’t the only ones writing about AI this week. James Plunkett published his thoughts on what previous eras of digital adoption teach us about AI, drawing similar lessons to ours from GDS’s work from 2011 onwards. Geoff Mulgan, a member of our Reimagining Whitehall Steering Group, wrote about how to diffuse AI technology throughout the economy, and Stian Westlake wrote his thoughts on repurposing science and technology for the Government’s missions, including using AI to synthesise complex evidence in the fast-moving world of government.

On to the table of the week

Continuing the theme of AI (and given how many reads we’ve already given you this week), this table showing the disparity in compute access between different countries paints a pretty stark picture. While the UK is world-leading in much of its AI research, the infrastructure we have locally to develop generative models lags behind many other countries: beyond the US and China, we have fewer GPU-enabled regions than Japan, South Korea, and Singapore. Paper here.