Are AI-Powered Systems Making UK Police More Efficient or Biased and Racist?

“Facial recognition technology has proven to be discriminatory against communities of colour.”
While the EU moves toward banning real-time facial recognition in public spaces, the UK stands out for its expanding reliance on this controversial technology.
British authorities justify this use for security and crime-fighting needs, while human rights groups warn that AI-powered police systems, such as Nectar, pose a threat to privacy due to the unsupervised collection of sensitive data.
The technology also faces criticism for treating Britons as suspects without a clear legislative basis, and its use in shops raises concerns about personal data and its impact on public life.
These developments in the UK raise serious questions about the balance between national security and individual privacy in an increasingly tech-dependent society, making the need for clear legislation and strong safeguards to protect citizens' rights more urgent than ever.
Police Technology
From supermarkets to mass festivals, the faces of millions of Britons are being scanned and monitored in real time on a massive scale by facial recognition technology.
Through these measures, the Metropolitan Police aim to identify and arrest wanted individuals by comparing faces in crowds with a database of thousands of suspects.
When a person on a police watch list passes near the cameras, the AI-powered system issues an alert.
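To make the alerting mechanism concrete, here is a minimal, hypothetical sketch of watch-list matching; the embedding representation, cosine similarity measure, and threshold value are illustrative assumptions, not details of the Met's actual system.

```python
import numpy as np

# Illustrative value; real deployments tune this "setting" carefully.
MATCH_THRESHOLD = 0.64

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(probe: np.ndarray,
                            watchlist: dict[str, np.ndarray]) -> str | None:
    """Return the ID of the best watch-list match above threshold, else None.

    Under a delete-unless-match policy, a None result means the probe
    embedding is discarded immediately rather than retained.
    """
    best_id, best_score = None, MATCH_THRESHOLD
    for person_id, enrolled in watchlist.items():
        score = cosine_similarity(probe, enrolled)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # an alert is raised only when this is not None
```

The choice of threshold drives the trade-offs discussed below: raising it reduces false alerts (misidentifications) but lets more genuine matches slip through, which is why the "setting" at which police operate the cameras matters so much.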
Metropolitan Police Commissioner Sir Mark Rowley has called the technology an effective policing tool, pointing to its success in identifying criminals in crime hotspots and crediting it with more than 1,000 arrests since the beginning of 2024.
Facial recognition cameras were first trialled in London and south Wales in 2016 and 2017 respectively, but the speed at which police forces are rolling out the technology has accelerated over the last 12 months.
According to the NGO Liberty, police forces scanned nearly 4.7 million faces with live facial recognition cameras last year, more than twice as many as in 2023.
The Home Office is currently working with the police to establish a new national facial recognition system, known as ‘Strategic Facial Matcher’.
The platform will be capable of searching a range of databases including custody images and immigration records.
The Met currently deploys its facial recognition cameras at a match threshold at which testing suggests misidentifications show no statistically significant gender or ethnicity bias.

Community Concern
The recent mass collection of biometric data on the streets of England and Wales has raised concerns about turning society into a nation of suspects, according to Big Brother Watch.
The use of these technologies by supermarkets and clothing stores to combat theft has raised similar concerns, due to the lack of information available about how the data is used.
Most of these stores use the Facewatch system, which maintains a watch list of individuals suspected of offences in the premises it monitors and issues an alert when one of them enters.
Human rights organizations have warned that Britain's expansion of facial recognition technology resembles the practices of authoritarian states like China more than it reflects Western democratic standards.
The UK's Equality and Human Rights Commission said the Met Police's use of the technology is unlawful because it contravenes articles of the European Convention on Human Rights protecting privacy, freedom of expression, and freedom of assembly.
Amnesty International UK said the plans should be immediately scrapped, with facial recognition proven to be discriminatory against communities of colour.
Eleven groups, including Human Rights Watch, wrote to the Met Police Commissioner urging him not to deploy the technology during the Notting Hill Carnival.
They accused him of unfairly targeting the African-Caribbean community and highlighted the racist biases of artificial intelligence.
Others have brought legal challenges against the technology's use, most recently Shaun Thompson, a 39-year-old Black man living in London, who says he was wrongly flagged as a suspect by one of the cameras and detained, prompting him to sue the police.

Unlike the UK, the EU has prohibited real-time facial recognition under its AI legislation since February 2025, except in narrow cases such as counterterrorism.
Home Secretary Yvette Cooper has defended the expanded use of the technology to apprehend serious criminals.
The Met Police recently announced that it would increase live facial recognition (LFR) deployments from four per week to as many as ten per week, spread across five days.
Police said there are strong safeguards in place, with biometric data automatically deleted unless there is a match.
Despite the ongoing controversy, the Home Office this month authorized police to use the technology in seven new areas.
The cameras are usually mounted in a police van, but permanent cameras are set to be installed in Croydon, south London, for the first time next month.
The expansion comes despite facial recognition never having been addressed in any act of Parliament.
Campaigners claim the police have so far been allowed to self-regulate their use of the technology.
Officers have previously run facial recognition at a threshold shown to misidentify Black people at disproportionately high rates, rather than at a more accurate setting.

Racial Discrimination
Separately, British police are promoting an experimental AI system as a tool to improve security efficiency, amid growing concerns that it could perpetuate racial discrimination against people of Arab, Muslim, or Mediterranean background, whether they figure as victims or as suspects.
The system, named Nectar, has been developed in collaboration with Palantir Technologies, a US tech giant co-founded by Peter Thiel, a donor to Donald Trump and close advisor during his first term as US president.
The controversial system allows for the analysis of sensitive data such as race, religion, political affiliation, and health history, prompting activists to warn that it entrenches institutional bias.
Nectar is designed to integrate dozens of policing databases into a single, AI-assisted platform that can create detailed digital profiles of individuals.
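As a rough illustration of what integrating databases into a single profile involves, here is a minimal, hypothetical sketch; the database labels and field layout are assumptions, since Nectar's internal design has not been made public.

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    """A digital profile collating everything held about one person."""
    person_id: str
    records: dict[str, list[dict]] = field(default_factory=dict)

def build_profile(person_id: str,
                  sources: dict[str, list[dict]]) -> Profile:
    """Merge records about one person from several policing databases.

    `sources` maps a database label (e.g. "custody", "intelligence",
    "domestic_abuse") to its rows; only rows matching `person_id`
    are folded into the profile.
    """
    profile = Profile(person_id)
    for label, rows in sources.items():
        matched = [row for row in rows if row.get("person_id") == person_id]
        if matched:
            profile.records[label] = matched
    return profile
```

Even this toy version shows why campaigners focus on scope: once sensitive categories such as health or political affiliation sit in any of the source databases, they flow into the unified profile by default unless explicitly filtered out.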
According to internal documents, the system is already in use in Bedfordshire and is being considered for wider deployment nationally.
Bedfordshire Police confirmed that Nectar remains in a narrow pilot phase, used within safeguarding schemes such as Clare's Law, the domestic abuse disclosure scheme, and relies solely on data the force already holds. The force also announced the formation of an ethics committee to review the system's use.
Palantir, the company behind the platform, says the system speeds up criminal record checks by 85% and improves accuracy.
The Home Office emphasized that any use of AI must be legal, transparent, and ethical, supported by strict governance frameworks.
While Bedfordshire Police and Palantir both insist that Nectar only accesses data already held by the police and does not provide new or predictive data, privacy campaigners and senior MPs have raised alarm over the extent of information being processed.
David Davis, a former shadow home secretary, called for urgent parliamentary scrutiny of Nectar and its legal underpinnings, warning that such tools risk acquiring powers beyond their remit.
“There are lots of reasons to be concerned by this software. It raises multiple issues – from data deletion to innocent people being flagged by mistake,” he said.
Baroness Helen Newlove, the Victims' Commissioner for England and Wales, warned that Nectar could reintroduce practices long described as dehumanizing by requiring victims to disclose details of their private lives and medical history, rather than focusing on the behavior of perpetrators.
Rebecca Hitchen, head of policy and campaigns at the End Violence Against Women Coalition, said there were serious concerns about the Nectar system.
Amnesty International UK said the Nectar system amounted to a form of indiscriminate mass surveillance.
Data from the UK Ministry of Justice for 2023 indicated that the arrest rate for people of Asian backgrounds, including those of Arab origin, was 10.3 per 1,000 people, roughly 1.7 times the rate of 5.9 per 1,000 for white people, revealing a clear gap in police treatment based on ethnicity.
The British organization StopWatch warned in its 2024 report that racial disparities in policing are not random, but reflect deeply ingrained systems of discrimination within the security architecture, now being reinforced by new technological tools.

Given its role in state surveillance and its close ties to US defence and intelligence agencies, concerns have been mounting in recent years about Palantir’s growing influence in the UK’s public sector.
In 2023 a government decision to award it a secretive £330 million contract to build and manage a new data platform for the NHS triggered a backlash over its access to sensitive patient records.
In an article published in the New Statesman in March 2025, journalist Andrew Marr warned that Palantir's expanding presence in UK public services reflects a broader shift towards privately controlled data infrastructure with little democratic oversight.
He argued that Palantir’s deepening integration into British institutions risks concentrating immense power in the hands of unelected technocrats and foreign shareholders, while eroding public trust and accountability.
Sources
- Facial recognition vans to be rolled out across police forces in England
- Live facial recognition cameras may become ‘commonplace’ as police use soars
- Police use controversial AI tool that looks at people’s sex lives and beliefs
- Bedfordshire Police using AI tool to profile political views, sex life, race and health data
- Rape victims could have health data and sexual history trawled by police AI tool