Pioneering standard for algorithmic transparency launched by UK Government

  • United Kingdom
  • Privacy, data protection and cybersecurity
  • Technology


The Central Digital and Data Office (CDDO) has launched an algorithmic transparency standard for government departments and public sector bodies. The standard, which delivers on commitments made in the National AI Strategy and National Data Strategy, will be piloted by several public sector organisations.

The aim of the standard (reportedly one of the first of its kind worldwide) is to strengthen the UK’s position as a world leader in AI governance and to enable transparency about how algorithmic tools are used to support decision-making, especially where decision outcomes have a legal or economic impact on the individuals affected. This is intended to promote trustworthy innovation by giving better visibility over the use of algorithms.

The standard requires relevant information to be provided in a “complete, open, understandable, easily-accessible, and free format”. Relevant bodies will be required to supply information to the CDDO about the algorithmic tools they use and the purpose those tools serve, which will be published in the Algorithmic Transparency Standard Collection. This will allow experts and the public to engage with and scrutinise the data provided.

Key Takeaways

The standard is made up of (i) an Algorithmic Transparency Data Standard and (ii) an Algorithmic Transparency Template and Guidance. It has two tiers: (1) a short description of the tool, including how and why it is being used; and (2) more detailed information about how the tool works, the dataset(s) used to train the model and the level of human oversight. In completing the Algorithmic Transparency Template, the body is required to provide information relating to both tier one and tier two.
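To make the two-tier structure concrete, the sketch below models a single transparency entry as a simple data record and checks that both tiers are populated. This is purely illustrative: the field names, the example tool, and the validation logic are assumptions for this sketch, not the official CDDO schema.

```python
# Illustrative only: field names are assumptions, not the official CDDO schema.
# Tier 1: a short description of the tool and how/why it is used.
TIER_1_FIELDS = {"name", "description", "how_used", "why_used"}
# Tier 2: more detail on how the tool works, training data and oversight.
TIER_2_FIELDS = {"how_it_works", "training_datasets", "human_oversight"}


def missing_fields(entry: dict) -> list:
    """Return a sorted list of required fields that are absent or empty."""
    required = TIER_1_FIELDS | TIER_2_FIELDS
    return sorted(f for f in required if not entry.get(f))


# A hypothetical entry (invented for illustration) covering both tiers.
example = {
    "name": "Permit Application Triage Tool",
    "description": "Prioritises permit applications for manual review.",
    "how_used": "Ranks incoming applications by completeness.",
    "why_used": "Reduces average processing time.",
    "how_it_works": "Classifier trained on application metadata.",
    "training_datasets": ["historic_applications"],
    "human_oversight": "All rejections reviewed by a case officer.",
}

print(missing_fields(example))            # → []
print(missing_fields({"name": "Tool"}))   # tier-one and tier-two gaps reported
```

The point of the check is the one the standard itself makes: an entry is incomplete unless both the tier-one summary and the tier-two detail are filled in.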

The standard is currently in a pilot phase, following which the CDDO will review the feedback received and seek formal endorsement from the Data Standards Authority in 2022.

It will be important to consider (amongst other things):

  • how this sits with the requirements under the GDPR governing automated decision-making
  • whether you have the requisite rights to the data used to train the model, and
  • who owns the resulting secondary and tertiary datasets and, indeed, the model itself