More research into computer algorithms is needed as they could have gender or race biases, the government has warned.
It announced independent watchdog the Centre for Data Ethics and Innovation (CDEI) will investigate algorithms used in the justice and financial systems.
But services already using artificial intelligence, such as predictive policing, will continue to operate.
Human rights group Liberty said it did not make sense to acknowledge the risk and not halt current programs.
“In launching this investigation, the government has acknowledged the real risk of bias when relying on predictive policing programs powered by algorithms. So why are they already being rolled out by police forces across the country?” asked Hannah Couchman, policy officer at Liberty.
“We should all be troubled by the silent expansion of the use of opaque algorithmic tools and the clear impact they have on our fundamental rights.”
A spokesman for the Department for Digital, Culture, Media and Sport, which launched the inquiry, told the BBC: “We know there is potential for bias but that is not the same as admitting that there are flaws in the system already.”
The government has not said whether algorithms currently in use are affected by bias issues.
But the CDEI will work with the Cabinet Office’s Race Disparity Unit to explore the potential for bias in algorithms designed for crime and justice.
It will also look at potential bias in algorithms used in finance to make decisions such as whether to grant individuals loans and those used in recruitment, which can screen CVs and influence the shortlisting of candidates.
Crime prediction software has already been adopted by at least 14 police forces in the UK, according to freedom of information requests by Liberty.
They fall into two types – predictive mapping of crime hotspots and risk assessments of individuals to try to work out who is more likely to commit an offence or become a victim of crime.
In Durham, the Harm Assessment Risk Tool is being used to help police officers decide whether an individual is eligible for deferred prosecution, based on their assessed risk of future offending.
And Avon and Somerset Police use Qlik, a data visualisation tool that helps the force decide where to deploy officers.
The force previously told the BBC that it made “every effort to prevent bias” with data not including ethnicity, gender or demographics.
Luka Crnkovic-Friis, co-founder of the Swedish AI Council, told the BBC: “Because AI is trained by people, it’s inevitable that bias will filter through.
“Automation tools are only ever as good as the data fed into them, so when using historical data where there is a strong human bias – such as race, re-offending rates and crime – there certainly is a risk that the results could produce bias and the government is right to take steps to account for this.”
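The feedback loop Mr Crnkovic-Friis describes can be sketched in a few lines of code. The numbers below are invented purely for illustration: two areas with (by assumption) identical underlying offence rates, but historical records that over-represent area "A". A naive model trained on those records keeps flagging "A", and the extra patrols it directs there generate yet more recorded arrests, amplifying the original bias.

```python
# Minimal, purely illustrative sketch of a bias feedback loop in
# predictive policing. All data here is hypothetical.
from collections import Counter

# Hypothetical history: offence rates are assumed equal in areas A and B,
# but past records over-represent area A (80 arrests vs 20).
historical_arrests = ["A"] * 80 + ["B"] * 20

def predict_hotspot(records):
    """'Train' on past records: flag the area with the most recorded arrests."""
    return Counter(records).most_common(1)[0][0]

records = list(historical_arrests)
for _ in range(10):
    hotspot = predict_hotspot(records)  # the model picks area A every round
    # Extra patrols in the flagged area mean more offences are *recorded*
    # there, even though the underlying rates were assumed identical.
    records.append(hotspot)

share_a = records.count("A") / len(records)
print(f"Hotspot: {predict_hotspot(records)}, share of records from A: {share_a:.2f}")
```

Running the sketch, area A's share of the records rises from 0.80 to roughly 0.82 after just ten rounds: the model's output has become part of its own training data, which is the risk the government's review is meant to examine.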
AI expert Dave Coplin, chief executive of consultancy The Envisioners, suggested what the CDEI should be investigating.
“We need to make sure that the CDEI is as focused on where it [artificial intelligence] is being used in government today as well as the further challenges that tomorrow’s usage may bring,” he told the BBC.
Artificial intelligence: Algorithms face scrutiny over potential bias