
UK warned over lack of transparency on use of AI to vet welfare claims

"The UK government risks contempt of court unless it improves its response to requests for transparency over the use of artificial intelligence (AI) to vet welfare claims, the information commissioner has said.

Over the past two years, the Department for Work and Pensions (DWP) has increasingly deployed machine-learning algorithms to detect fraud and error in universal credit (UC) claims.

Ministers have maintained a veil of secrecy over the system, which the transparency campaign group Big Brother Watch has described as “seriously concerning”. The DWP has refused freedom of information requests and blocked MPs’ questions, arguing that providing information could help fraudsters.

Child poverty campaigners have said the impact on children of the AI tools could be “devastating” if benefits are suspended.

The information commissioner, John Edwards, has now warned the DWP it could be in contempt of court unless it changes its approach and spells out within 35 days the terms under which it could release more information.

In July 2022, the Guardian asked what information was fed to the algorithm to help it decide who might be cheating, which companies were involved, and for the results of a “fairness analysis” of disproportionate impacts on ethnic minorities, older and disabled people and any others with protected characteristics. The DWP refused to comply with the request.

In a decision notice after an appeal by this newspaper, Edwards’ office said he was “disappointed that the DWP failed to consider the request on the basis of its clear, objective interpretation” and had “incorrectly” interpreted the request.

It also described as “unfortunate” the way the DWP had changed its reason for not releasing information 11 months after the request was first made, creating a fresh delay.

The DWP initially claimed that releasing information would harm the prevention or detection of crime and that it would prejudice commercial interests. Then in June it said it would cost too much to gather the information, such were the volumes of material about the AI system held on government computers.

The DWP recently expanded its use of AI from scanning claims for welfare advances to applications made by cohabitants, self-employed people and people applying for housing support, as well as to assess claimants’ savings declarations. One of the questions the DWP is refusing to answer is what historical fraud and error data is inputted to train the AI model. The system has shown some evidence of bias, according to the DWP’s own trials. Universal credit is used by 5.9 million people.

“Whilst reducing fraud in the benefit system is important, the uses of these tools have real-life implications for families in poverty who get caught up in their deployment,” said Claire Hall, the head of strategic litigation at Child Poverty Action Group. “The DWP must drop its culture of secrecy and provide meaningful reassurance of the fairness of these tools.”

Big Brother Watch, a civil liberties and privacy campaigning organisation, said the DWP was being “alarmingly, unjustifiably secretive”.

The group’s director, Silkie Carlo, said: “It is totally unacceptable to refuse legal requests for information about the use of powerful technologies that run a high risk of causing unfair and discriminatory impacts on some of our country’s most vulnerable people.

“Algorithms in the welfare system tend to cast a wide net of digital suspicion and can be dangerously wrong, with serious harms. Government uses of AI should trigger much greater public transparency, not less. People have the right to understand how their information is being used and why decisions are made about them, rather than to be left at the mercy of opaque AI.”

UN experts have warned that extending the UK’s “digital by default” welfare system with machine learning, without greater transparency, risks creating serious problems for some benefit claimants.

In March the information commissioner warned the DWP to improve its broader handling of freedom of information requests. The commissioner complained of a “consistently poor level of performance” and found the department was often not properly reading the requests, that it persistently used standard templates when refusing requests and that it had recently been withholding more information.

It has ordered the DWP to set out how it might be able to respond to the Guardian’s specific request by 22 September.

A spokesperson for the DWP said: “We welcome that the commissioner agreed we are entitled to refuse to comply with the request on the basis of cost, as defined under the FoI Act. We have noted the points about providing further advice and assistance to the requester. The department takes its compliance with the Freedom of Information Act and Cabinet Office code of practice very seriously and keeps its approach to the publication of information under constant review.”" https://www.theguardian.com/politics/2023/sep/03/uk-warned-over-lack-transparency-use-ai-vet-welfare-claims


One would think that one human right would be not to have your future determined by decisions taken by a non-human entity such as an opaque AI, without (at the very least) a human explaining it all on request, and in an accountable manner. Or… we're being ruled by bots.
