
You may have heard of Clearview AI; it is a rather controversial facial recognition firm that scrapes selfies and other personal data off the Internet without consent to feed an AI-powered identity-matching service it sells to law enforcement and other interested parties. So, unsurprisingly, it was only a matter of time before regulators reacted to its practices.
That someone is France’s privacy watchdog, the CNIL, which has already ordered Clearview AI to stop the unlawful processing of French citizens’ information and delete their data. That order fell on deaf ears, and now the French watchdog is acting again.
So this isn’t Clearview’s first GDPR breach; its catalog of violations already includes:
- Unlawful processing of personal data (breach of Article 6 of the GDPR)
- Individuals’ rights not respected (Articles 12, 15 and 17 of the GDPR)
- Lack of cooperation with the CNIL (Article 31 of the GDPR)
“Clearview AI had two months to comply with the injunctions formulated in the formal notice and to justify them to the CNIL. However, it did not provide any response to this formal notice,” the CNIL wrote in a recent press release announcing the sanction.
“The chair of the CNIL therefore decided to refer the matter to the restricted committee, which is in charge of issuing sanctions. On the basis of the information brought to its attention, the restricted committee decided to impose a maximum financial penalty of 20 million euros, according to article 83 of the GDPR.”
As a reminder, the EU’s GDPR allows for penalties of up to 4% of a firm’s worldwide annual revenue for the most serious infringements, or €20 million, whichever is higher.
The problem, however, is whether France will actually see a penny of this money from Clearview.
You see, the U.S.-based company has been issued with a slew of penalties by other data protection agencies across Europe in recent months, including €20M fines from Italy and Greece and a smaller penalty in the U.K. But it’s not clear it paid anything to these authorities, which themselves have limited resources (and legal means) to pursue Clearview for payment outside their own borders.
So the GDPR penalties look mostly like a warning to stay away from Europe.
On its end, Clearview’s PR agency LakPR Group sent the following statement, attributed to CEO Hoan Ton-That, to TechCrunch:
> There is no way to determine if a person has French citizenship, purely from a public photo from the internet, and therefore it is impossible to delete data from French residents. Clearview AI only collects publicly available information from the internet, just like any other search engine like Google, Bing or DuckDuckGo.
The statement goes on to reiterate Clearview’s earlier claims that it does not have a place of business in France or in the EU, nor undertake any activities that would “otherwise mean it is subject to the GDPR.” Instead, it argues that Clearview AI’s database of publicly available images is “lawfully collected, just like any other search engine like Google.”
For what it’s worth, the GDPR has extraterritorial reach, so Clearview’s jurisdictional argument is moot, while its claim that it does nothing that would make it subject to the GDPR looks absurd given its amassed database of over 20 billion images worldwide, including images of many Europeans.
“We only collect public data from the open internet and comply with all standards of privacy and law. I am heartbroken by the misinterpretation by some in France, where we do no business, of Clearview AI’s technology to society. My intentions and those of my company have always been to help communities and their people to live better, safer lives,” concludes Clearview’s PR.
This is the pattern for Clearview: each time it receives a sanction from a foreign regulator, it denies wrongdoing, adding that the body in question has no jurisdiction over its business.
In the meantime, Sweden — part of the EU — has fined a local police authority for unlawful use of Clearview, helping at least clamp down on any local demand.
Things aren’t that smooth in Clearview’s home turf of the U.S. either. Earlier this year, the company agreed to settle a lawsuit that had accused it of running afoul of an Illinois law banning the use of individuals’ biometric data without consent.
At the time, Clearview agreed to limits on its ability to sell its software to most U.S. companies. However, it found a way around the ruling by selling its algorithm, rather than access to its database, to private companies in the U.S.
So, where do we go from here? Some argue we need to empower regulators to order the deletion or even market withdrawal of algorithms trained on unlawfully processed data. This could be one solution, and the EU’s upcoming AI Act may contain such a power.
Otherwise, collectively, we could be heading toward an AI dystopia where even a VPN won’t help much.