---
title: The UK’s AI Opportunities Action Plan – somewhat quiet on risks
date: 2025-01-22 12:51:00 Z
categories:
- Artificial Intelligence
summary: Last week the UK government launched their 50-point AI Opportunities Action
  Plan. The plan is ambitious, but it is something of a mixed bag. Some sizeable and
  worthwhile investments sit alongside others which are quite questionable. But what
  I am more concerned with is what is missing. The plan is optimistic, upbeat and
  pro-innovation, but rather silent on the risks.
author: ceberhardt
---

Last week the UK government launched their 50-point [AI Opportunities Action Plan](https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan). I’m going to ignore the marketing hyperbole (they’re going to [“mainline AI into the veins of this enterprising nation”](https://www.gov.uk/government/news/prime-minister-sets-out-blueprint-to-turbocharge-ai) - seriously?!) and concentrate more on the substance ... and there is a lot of it. The plan is ambitious, but it is something of a mixed bag. Some sizeable and worthwhile investments sit alongside others which are quite questionable. But what I am more concerned with is what is missing. The plan is optimistic, upbeat and pro-innovation, but rather silent on the risks.

## A plan built on optimism

The plan itself is made up of 50 points, clustered around three main themes: (1) Creating the foundations and enabling AI, (2) Changing lives through AI, via practical applications, and (3) Securing our future through home-grown AI.

Delving into the detail, the more significant ambitions are to:

* Create ‘AI Growth Zones’ to speed up planning approvals and provision of power


* Create a National Data Library, to unlock both public and private data


* Increase the number of AI graduates, increase diversity and create pathways for lifelong learning


* Reform the UK text and data mining regime, to address some of the copyright-related challenges


* Adopt a “Scan > Pilot > Scale” approach in government


* Create a new UK Sovereign AI unit, to ensure we have a stake in frontier AI


* Deliver a 20x increase in investment in the AI Research Resource, whose main focus is the funding of supercomputers

The overall theme of this action plan reflects the [pro-innovation approach to regulation](https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper) which the government outlined less than a year ago and also reflects the views (and risk profile) of its author, Matt Clifford, who has a track record as an entrepreneur. As a result, it very much focusses on the positives whilst downplaying the potential risks. This is something I want to address in more detail, but first, I think we need to take a step back and reflect on the current state of AI.

## A recent history of AI

Artificial Intelligence (AI) isn’t a new field; it is one that has seen active research, investment and successes for many decades. The reason it is getting so much attention now is the creation of Large Language Models, a form of Generative AI (GenAI) and the technology behind products such as ChatGPT.

Based on ideas first shared by Google (in their paper somewhat cutely called “Attention is all you need”), and further developed by OpenAI, this technology resulted in a significant step forward in AI capability. Notably, this AI is general purpose (able to tackle a wide range of tasks) and driven by human language. Most people started to wake up to the power of this new breed of AI when ChatGPT was released in late 2022.

Since then, many other organisations have been racing to replicate OpenAI’s models, with considerable success, while others have raced to create new products and services fuelled by these new AI capabilities.

Recent press has been dominated by the pursuit of “Artificial General Intelligence” (AGI). OpenAI and its peers are furiously racing to create the most sophisticated AI models possible and to claim that they have created the first AI that is truly intelligent. The amount of money being invested in this pursuit (much of which is spent on compute) is mind-boggling, with investors expecting that the first company to achieve AGI will reap colossal revenues.

However, while the AI companies seem somewhat preoccupied with the pursuit of AGI, those of us with more modest goals, who simply wish to do something useful with AI, are grappling with all manner of practical issues, including some fundamental challenges with the current AI models themselves (hallucinations, bias and so on).

I firmly believe the current tranche of AI will deliver genuine value, but it is going to take time to solve the practical challenges.

The AI Opportunities Action Plan needs to be assessed with the current AI limitations and challenges in mind, as well as the opportunities.

## AI Growth Zones

A significant number of the 50 plan points relate to compute infrastructure - data centres and supercomputers - with an initial pilot AI Growth Zone to be located in Culham, Oxfordshire. This is an appealing prospect for the government as it would create numerous jobs beyond technologists.

Training and developing AI models requires a significant amount of compute power, and those who are chasing the AGI ‘prize’ certainly need these scarce resources. However, there is also a growing trend towards the development of smaller, highly capable AI models that can run on your laptop or mobile phone. There is already more than enough compute power to support a wide range of practical AI applications.
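
To give a sense of how accessible this class of model has become, here is a minimal sketch of running a compact, openly available model on an ordinary laptop. It assumes the Hugging Face `transformers` library (plus PyTorch) is installed, and the model name is simply an illustrative example of the small-model category, not something taken from the action plan.

```python
# A minimal sketch: running a small language model locally, no data centre required.
# Assumes `pip install transformers torch`; the model name is an illustrative
# example of the compact, openly available models discussed above.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # ~0.5 billion parameters, runs on a laptop CPU
)

result = generator(
    "In one sentence, what is an AI Growth Zone?",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```

The point is simply that a great deal of practical AI work already fits comfortably on existing hardware.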

Pursuing AGI is a risky and expensive bet (will we ever achieve it? what value will it deliver? do we even know what it is?), and something I feel is best left to private companies.

We should certainly have suitable compute facilities to support research activities, but I don't feel there is enough evidence at the moment to support a significant increase in the overall compute power our nation needs. Furthermore, the environmental impact of so many private companies training ever-larger models in pursuit of AGI cannot be ignored.

## The National Data Library (NDL)

The action plan quite rightly calls out the pivotal role data has in AI, both for those developing new models and for the greater number of people developing applications that incorporate AI.

Plans for the NDL were first aired a few months ago, and the Department for Science, Innovation and Technology (DSIT) has already confirmed that it is [working towards this goal](https://www.ukauthority.com/articles/dsit-confirms-work-on-national-data-library/). The action plan looks to seed this development with a commitment to “identify at least 5 high-impact public datasets it will seek to make available to AI researchers and innovators”.

As we go about our daily lives we generate vast quantities of data, spanning medical, education, travel, the environment and much more. Making this data available via public datasets is a fantastic way to fuel both product and academic innovation. Our data should be open by default.
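
For a sense of what “open by default” means in practice for an innovator, here is a minimal sketch of consuming a published dataset. The URL is a placeholder for the kind of CSV endpoint a National Data Library might expose; it is not a real address.

```python
# A minimal sketch of pulling an open public dataset into an analysis or an
# AI application. The URL below is a placeholder, not a real NDL endpoint.
import pandas as pd

DATASET_URL = "https://data.example.gov.uk/ndl/hospital-waiting-times.csv"

df = pd.read_csv(DATASET_URL)

# Once the data is openly published, exploration becomes a one-liner.
print(df.head())
print(df.describe())
```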

## Increase the number of AI graduates

Creating value with Generative AI requires skills that most people lack – in fact, the skills themselves are only just being formalised. It has already given rise to a new discipline, prompt engineering, and there will be more to come.
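
To illustrate what prompt engineering actually involves, here is a hedged sketch comparing a naive request with a structured one. The `call_llm` function is hypothetical - a stand-in for whichever model API you happen to use.

```python
# Illustration of prompt engineering: the same task asked naively versus with a
# structured prompt. `call_llm` is a hypothetical stand-in for any chat-style API.
def call_llm(prompt: str) -> str:
    """Hypothetical model call; replace with your provider of choice."""
    return "<model response would appear here>"

complaint_text = "My delivery arrived two weeks late and nobody replied to my emails."

# Naive prompt: vague, no role, no format constraints.
naive_prompt = "Tell me about this customer complaint: " + complaint_text

# Engineered prompt: role, task, constraints and output format are explicit.
engineered_prompt = f"""You are a customer-support analyst.
Task: classify the complaint below.
Return JSON with keys "category", "severity" (1-5) and "summary" (one sentence).
If information is missing, say so rather than guessing.

Complaint:
{complaint_text}
"""

print(call_llm(engineered_prompt))
```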

The action plan quite rightly points out that we don't have a good view of the skills gap, referencing the most recent research which [Ipsos Mori conducted in 2019](https://assets.publishing.service.gov.uk/media/60992b39d3bf7f2888d18f84/DCMS_and_Ipsos_MORI_Understanding_the_AI_Labour_Market_2020_Full_Report.pdf), before any of us had heard of GenAI.

I very much support the expansion of pathways into AI careers and increased diversity called out in the action plan. However, if the transformational effect of AI is going to be as great as some are already predicting, everyone needs access to suitable training. The action plan does touch on this in relation to the lifelong skills programme.

## Update copyright laws (in favour of the AI companies)

The highly capable Generative AI models that have emerged in the past few years almost all share a shady background that their creators would rather not talk about. They have been trained on vast quantities of copyrighted material, without seeking permission from the original publisher or copyright holder.

The creators of these models tend to [argue that this is fair use,](https://copyrightblog.kluweriplaw.com/2024/02/29/is-generative-ai-fair-use-of-copyright-works-nyt-v-openai/) a legal mechanism that permits ‘transformative’ use of copyright materials. Unfortunately, these laws were created years before Generative AI and no longer seem fit for purpose. I don’t want to linger on this topic too long, but it doesn’t seem fair to me that, for example, photographers are [now losing stock photography royalties to AI models](https://www.stockperformer.com/blog/is-ai-killing-the-stock-industry-a-data-perspective/) that were trained on their images without their permission.

So what does this all have to do with the AI Action Plan?

In the section on “enabling safe and trusted AI development”, the plan seeks to “reform the UK text and data mining regime so that it is at least as competitive as the EU”. This alludes to the 2019 EU Copyright Directive, which “allows organisations to use Text Data Mining (TDM) for other purposes, provided the copyright holder has not explicitly opted out.”

I think this is fundamentally the wrong decision.

Putting aside the complexity of creating an opt-out process in the first place, the copyright holder should not bear the burden of opting out of a process that they may not be aware of or even technically understand. This is clearly putting the interests of the AI industry ahead of many hundreds of thousands of small-time creatives.
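
To make that burden concrete: in practice, opting out means publishing machine-readable signals (robots.txt being the most common) and trusting that every AI crawler checks them. Here is a minimal sketch of that check from the crawler's side, using Python's standard library; the site and bot names are purely illustrative.

```python
# A minimal sketch of the robots.txt-style opt-out mechanism. Note that it is
# the crawler that decides whether to honour the signal; the creator can only
# publish it and hope. Site and user-agent names are illustrative.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://example-photographer.co.uk/robots.txt")
robots.read()  # fetch and parse the site's opt-out rules

if robots.can_fetch("ExampleAIBot", "https://example-photographer.co.uk/portfolio/"):
    print("Crawl permitted - these images may end up in a training set")
else:
    print("This crawler has been asked to stay away")
```

Every photographer, illustrator and writer would need to know such mechanisms exist, configure them correctly, and trust that every AI company's crawler behaves - which is exactly the burden I think they should not have to carry.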

Elsewhere, the section on safe and trusted AI development is equally lacking. Most of the recommendations relating to regulation seem more concerned with ensuring the regulators make good use of AI themselves than with meaningfully tackling the ever-growing challenges.

In contrast, the EU AI Act has a stronger regulatory focus, for example ensuring that [AI-generated content is watermarked](https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2023)757583) to tackle misinformation.

## Scan > Pilot > Scale

To drive AI adoption within government, the Action Plan outlines a three-stage process:

* **Scan** - Investing in building a deep and continually updated understanding of AI


* **Pilot** - Rapidly developing prototypes, or using fast, light-touch procurement, to spin up pilots in high-impact areas, with robust evaluation and published results


* **Scale** - Identifying successful pilots that can be applied in different settings to support citizens

The 12 action plan points that relate to the above are all deeply practical, covering leadership, partnering, procurement and fostering a culture of re-use through open source.

Given that most organisations are currently struggling to capitalise on the amazing advances we’ve seen in AI in the last few years, the outlined approach, with a focus on prototypes and pilots, makes a lot of sense.

## Creating a new UK Sovereign AI unit

The final section of the Action Plan addresses how we “secure our future with homegrown AI” and starts with various dramatic predictions.

The narrative opens with reference to scaling laws, which are already failing (hence a recent focus on inference scaling), and the rise of agentic AI, which has yet to demonstrate practical benefits. I’m going to take the action plan narrative with a pinch of salt, but the overall sentiment that we should be creating our own AI technology is something I am on board with.

The proposed UK Sovereign AI unit will be a public-private collaboration that combines funding, access to data and compute to attract AI companies to operate within the UK.

## Final thoughts

Given the potential economic benefits, it’s important that the UK government has a strong message around AI. This Action Plan contains a lot of substance relating to skills, data, compute and more that will help us reap the economic rewards.

However, there is also a lot that is missing from this plan. Most notably, actions that address the risks that we know about now, and those that will almost certainly emerge in the very near future. It is understandable that the government wants to ride this wave of innovation and create an environment where private (AI) businesses invest in the UK, but it cannot be at the cost of the rights, or opportunities, of UK citizens.
