This summer will be remembered as the moment that facial recognition technology was, suddenly, everywhere. From British shoppers in King's Cross to Hong Kong protesters, people around the world have found themselves under the watchful eye of artificial intelligence.
Last week a UK court found that police trials of facial recognition technology in Welsh public spaces did not contravene either human rights or data protection laws. This will come as a relief to British police forces, which are keen to deploy the technology despite the lack of a clear government mandate to do so. Yet their enthusiasm is difficult to square with the proven fallibility of current machines, which have been shown to be inaccurate, unreliable and biased. The technology often fails to do what it says on the tin: recognise faces, particularly if those faces belong to women or people of colour.
While the courts may have endorsed facial recognition technology, the British public have not. A new survey by the Ada Lovelace Institute found that 55 per cent of people wanted government restrictions on police use of the technology. Respondents were also uncomfortable with its commercial use: only 17 per cent wanted to see facial recognition used for age verification in supermarkets, 7 per cent approved of its use for tracking shoppers and 4 per cent thought it appropriate for screening job candidates.