Tracking Automated Government (TAG) Register
Development of the Tracking Automated Government ‘TAG’ Register began in November 2021. Its purpose is to track and analyse identified forms of automated decision-making in government.
The TAG Register has three main functions:
To collate and compare examples of automated decision-making in government, including the level of transparency of each tool and any evidence of unequal impacts. This allows for a better understanding of the overall state of government automation and the key risks and challenges.
To set an example of what good transparency looks like when it comes to government use of automated decision-making. The Register was not built by government; it was pieced together by Public Law Project from information hard-won through investigations by journalists and civil society organisations.
To ‘put our money where our mouth is’. Public Law Project believes that people have a right to easy-to-understand information about decisions affecting them, and that it is not fair or just to keep our knowledge about government algorithms to ourselves.
All information contained in the TAG Register is true to the best of our knowledge. Opacity is an inherent challenge of working in this area. We have attempted to find and collate information scattered across different sources and to give credit to those who have worked hard to obtain it, but we fully accept that there may be gaps. We feel sure that there are many more tools that we do not yet know about, and more information to be gathered about the tools we do know about. If you have any relevant information which is not currently included in the TAG Register, or if you would like to make a correction, please get in touch.
To trust that government automated systems work fairly, reliably, and lawfully, we need transparency and explainability; individuals should be able to understand how a system works, and how decisions that impact their lives are reached.
But until now there has been no systematic, public information about how and why public authorities use automated systems, or how they impact affected groups.
The Tracking Automated Government ‘TAG’ Project aims to bridge this gap.
By tracking and analysing examples of automated decision-making in government, the register can help us assess whether their use conforms to public law principles and operates in the interests of marginalised individuals, groups, and communities.
And if it doesn’t, individuals and organisations representing their interests will be better placed to challenge these decisions.
The TAG register is produced by Public Law Project, an independent national legal charity with the aim of improving access to public law remedies for marginalised individuals. One of its strategic priorities is ensuring that government use of new technologies is transparent and fair, specifically the growing use of automated systems to inform or make decisions in areas such as immigration, welfare, and policing.
Suggested citation: Public Law Project, Tracking Automated Government ‘TAG’ Register (9 February 2023) http://trackautomatedgovernment.org.uk/
We use the language of ‘automated decision-making’ or ‘ADM’, as opposed to ‘AI’ or ‘algorithms’. Public Law Project is specifically interested in the way public bodies use automated systems to make decisions. An automated decision is one in which an automated system performs at least part of the decision-making process. Automated systems can enter into government decision-making in a range of different ways:
Decision support: Where an automated system provides additional information to aid a human decision-maker in their decision (e.g., a system assesses whether an offender poses a risk of reoffending, and presents that risk score to a parole officer to inform their decision)
Streaming or triage: Where an automated system determines the type and quality of human judgement involved in a particular case (e.g., a system deems a visa application to be high risk, which means that the application is directed to a more senior official and subjected to more stringent scrutiny)
Fully (or ‘solely’) automated: Where an automated system takes a decision and action in relation to a person or group without human input (e.g., a system automatically assesses and approves an application for a driver’s licence)
The terms ‘AI’ and ‘algorithm’ cover technologies that perform some of the functions we would normally expect a human being to do, such as performing a task or solving a problem, but both terms are somewhat vague.
We have made the decision to refrain from using the term ‘BAME’ to identify anyone who is subject to racialisation. We will be as clear in our use of language when referring to impacted groups as the available data allows us to be. We acknowledge that ‘BAME’ is non-specific, lacks nuance, and does not leave room to recognise how minoritised individuals experience racism, in particular the unique experiences of racism associated with Black people. To provide an example, according to recent Home Office figures, Black people are nine times more likely than white people to be stopped and searched by police, whereas ethnic minorities as a whole are four times more likely.
However, some of the publications from which we extract data (namely government publications) use the term, and on occasion we are not able to break down the data behind it, so we have not been able to offer a more nuanced breakdown. Where possible, we will continue to investigate data presented in this manner to further break down the ‘BAME’ label and present data on race in the clearest way possible.
The Register records the following categories of information for each tool, each explained below.

Level of transparency
In the simplified version of the TAG Register, we have given each tool a transparency rating. There are three possible transparency ratings: Low, Medium, and High.
A tool has a high level of transparency if:
There are two possible routes to a ‘medium’ transparency rating. A tool has a medium level of transparency if:
or
A tool has a low level of transparency if:
*For the purposes of these transparency ratings, crucial information includes:
Makes decisions about individuals or groups of individuals
All of the tools in the TAG Register have the potential to impact individuals or groups of individuals. However, some tools have a more direct human impact, whereas others are likely to influence decisions which could subsequently have human impact.
We have tried to capture the directness of the human impact by distinguishing between tools that are used to make decisions about individuals or groups of individuals and tools that are used for other purposes. We do, however, accept that there is no distinct line between these two types of tools and, where necessary, we have made a judgement call in classifying a given tool.
Makes decisions that affect people's legal rights, entitlements, or similarly significant decisions
This category aims to capture the severity of any human impact.
To assess the severity of the impact, we have adopted the definition from Article 22 of the UK General Data Protection Regulation, which says that data subjects have the right not to be subject to solely automated decision-making which “produces legal effects concerning him or her or similarly significantly affects him or her”. We recognise that our chosen criteria, especially ‘similarly significant decision’, are somewhat subjective and, where necessary, we have made a judgement call as to whether we think a tool meets the threshold.
Operational tags
We have given the tools on the Register ‘operational tags’, to help you find tools that use particular techniques.
Unequal impacts
There is a lot of enthusiasm and optimism about how algorithms, automated tools, and AI systems can be used to make public decision-making faster and cheaper. While much of this can be true, automated decision-making is also likely to make certain kinds of problems in administration more common, such as a lack of communication between those making the decisions and those subject to them. Lack of information about the impact of specific tools is concerning because the use of big data and automated decision-making tools can carry special risks of discrimination. For example, tools that are developed using historical data may reinforce existing prejudices. But the specific risks posed by a given tool cannot be properly understood without further information, such as an evaluation or impact assessment.
We chose to label this category of information ‘unequal impact(s)’, rather than ‘discrimination’, as not all instances of unequal impact equate to ‘discrimination’ within the meaning of equality and public law (as defined in the Equality Act 2010 and the Human Rights Act 1998 in giving effect to the European Convention on Human Rights). Whilst some instances of tools having unequal impacts will equate to discrimination under the law, we will not make definitive statements until further information or analysis is available, or the tool is found to be discriminatory in the reasoning of a court or tribunal.
Information made available by the public body, inspector, or developer
Information that has been published by the public body operating the tool, the relevant inspector of the area of policy or public body, or the (often private sector) developer of the tool.
We recognise that information disclosed in response to requests for information is also often made available by the public body, but this category covers only information published independently of such investigations.
Investigatory methods
The information in this register is the joint effort of many individuals, groups, and organisations who have carried out investigations in different ways and for different purposes. The most common method is the freedom of information request under the Freedom of Information Act (FOIA) 2000; other methods are detailed in the dashboard.
Data Protection Impact Assessment (DPIA), Equality Impact Assessment (EIA), or other evaluation report
A DPIA is a type of risk assessment required under Article 35 of the UK GDPR. It helps organisations to identify and minimise risks relating to personal data processing activities, and it is an essential tool for ensuring that organisations do not deploy, and individuals are not subjected to, systems that may lead to unlawful, rights-violating, or discriminatory outcomes.
An EIA is a systematic and evidence-based tool for the public sector. It is an analysis of a proposed organisational policy, or a change to an existing one, which assesses whether the policy has a disparate impact on persons with protected characteristics, as defined under the Equality Act 2010. The Equality Act 2010 does not specifically require EIAs to be carried out, although they are a way of facilitating and evidencing compliance with the Public Sector Equality Duty (under s.149 of the Equality Act 2010).
There are currently no specific risk assessments or evaluations required for the use of automated tools and algorithms, so a public body may opt to carry out its own type of evaluation to capture the specific risks of a tool it operates.
Litigation
Legal action taken specifically in relation to the use, or an operational aspect, of an automated tool or algorithm.
Whether you offer front-line support to people in the welfare or immigration system, are a legal practitioner representing individuals in cases against the police, or have recently had a decision made about you by a public body, there is a high chance that you have encountered an element of automated decision-making (ADM).
The TAG Register can help you better understand how decisions that impact the people you work with, or that are made about you, are reached. By bringing together information about known forms of automation in a user-friendly platform, the TAG Register makes information about automated decision-making accessible and understandable.
The information can be viewed in a ‘simplified’ or ‘detailed’ version, so you can see the information most relevant to what you want to know. If you are curious about one public body, or want to explore automated tools identified within specific policy areas, you can toggle the dashboard to navigate the site more easily and find what you are interested in.
Those with specific interests can use the search box at the top right of the Register. Any word or phrase that appears in the database can be found using that box. This will be useful if users are looking for a specific tool, policy area or developer.
The Register will be updated on an ongoing basis. Get in touch if you have any information to help build a better picture of how these tools work, would like us to investigate a suspected automated decision-making system, or you have thoughts and suggestions on increasing its usability.
The TAG Register’s development was led by Public Law Project researchers Tatiana Kazim and Mia Leslie, who specialise in government use of new technologies. Bonavero Institute Student Fellow Luca Montag provided assistance with the further development and updating of the Register. Much-valued input and assistance was provided by the wider team at Public Law Project.
Public Law Project works closely with the following organisations and is grateful for their research in this area, and their input into the content and design of the TAG Register: Lighthouse Reports, Privacy International, The Digital Constitutionalist, Amnesty Tech, Big Brother Watch, Child Poverty Action Group, and Kaelynn Narita (PhD Candidate at Goldsmiths, University of London) and Connie Hodgkinson Lahiff (PhD Candidate at University of East Anglia).