Last week, the UK Information Commissioner’s Office (ICO) slapped a £7.5m fine on a smallish tech company called Clearview AI for “using images of people in the UK, and elsewhere, that were collected from the web and social media to create a global online database that could be used for facial recognition”. The ICO also issued an enforcement notice, ordering the company to stop obtaining and using the personal data of UK residents that is publicly available on the internet and to delete the data of UK residents from its systems.
Since Clearview AI is not exactly a household name, some background might be helpful. It’s a US outfit that has “scraped” (ie digitally collected) more than 20bn images of people’s faces from publicly available information on the internet and social media platforms all over the world to create an online database. The company uses this database to provide a service that allows customers to upload an image of a person to its app, which is then checked for a match against all the images in the database. The app produces a list of images that have similar characteristics to those in the photo provided by the customer, together with a link to the websites whence those images came. Clearview describes its business as “building a secure world, one face at a time”.
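For readers curious about the mechanics, a service of this kind is essentially a similarity search: each face in the database is reduced to a numeric “embedding” vector, and an uploaded photo is compared against every stored vector to find the closest matches. The sketch below is only an illustration of that general technique, not Clearview’s actual code or model; the embeddings, URLs and dimensions are invented for the example.

```python
# Illustrative sketch of face-matching by embedding similarity.
# Not Clearview's system: the data, URLs and sizes here are made up.
import numpy as np

rng = np.random.default_rng(0)

# Pretend database: one 128-dimensional embedding per scraped image,
# each paired with the (hypothetical) URL the image was collected from.
db_embeddings = rng.normal(size=(10_000, 128))
db_embeddings /= np.linalg.norm(db_embeddings, axis=1, keepdims=True)
db_urls = [f"https://example.com/photo/{i}" for i in range(10_000)]

def find_matches(query_embedding: np.ndarray, top_k: int = 5):
    """Return the top_k database entries most similar to the query face."""
    q = query_embedding / np.linalg.norm(query_embedding)
    similarities = db_embeddings @ q  # cosine similarity: rows are unit vectors
    best = np.argsort(similarities)[::-1][:top_k]
    return [(db_urls[i], float(similarities[i])) for i in best]

# A customer "uploads" a photo; in a real system a neural network would
# turn it into an embedding. Here a random vector stands in for that step.
query = rng.normal(size=128)
for url, score in find_matches(query):
    print(f"{score:.3f}  {url}")
```

At the scale of 20bn images a real system would use an approximate nearest-neighbour index rather than this brute-force comparison, but the principle – match one face vector against billions of others and return the source links – is the same.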
The fly in this soothing ointment is that the people whose images make up the database were never informed that their photographs were being collected, let alone asked to consent to their use in this way. Hence the ICO’s action.
Most of us had never heard of Clearview until January 2020, when Kashmir Hill, a fine tech journalist, revealed its existence in the New York Times. It was founded by a tech entrepreneur named Hoan Ton-That.